2025-08-29 17:00:31.273259 | Job console starting
2025-08-29 17:00:31.282699 | Updating git repos
2025-08-29 17:00:31.340928 | Cloning repos into workspace
2025-08-29 17:00:31.573551 | Restoring repo states
2025-08-29 17:00:31.603373 | Merging changes
2025-08-29 17:00:31.603412 | Checking out repos
2025-08-29 17:00:31.828854 | Preparing playbooks
2025-08-29 17:00:32.476258 | Running Ansible setup
2025-08-29 17:00:36.753061 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 17:00:37.489434 |
2025-08-29 17:00:37.489629 | PLAY [Base pre]
2025-08-29 17:00:37.507253 |
2025-08-29 17:00:37.507403 | TASK [Setup log path fact]
2025-08-29 17:00:37.538802 | orchestrator | ok
2025-08-29 17:00:37.557849 |
2025-08-29 17:00:37.558005 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 17:00:37.620946 | orchestrator | ok
2025-08-29 17:00:37.635100 |
2025-08-29 17:00:37.635269 | TASK [emit-job-header : Print job information]
2025-08-29 17:00:37.692036 | # Job Information
2025-08-29 17:00:37.692320 | Ansible Version: 2.16.14
2025-08-29 17:00:37.692382 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-08-29 17:00:37.692444 | Pipeline: post
2025-08-29 17:00:37.692485 | Executor: 521e9411259a
2025-08-29 17:00:37.692521 | Triggered by: https://github.com/osism/testbed/commit/acd65c675baf60c682b1bad8eff50e1653a0fc67
2025-08-29 17:00:37.692561 | Event ID: 89960ed8-84e1-11f0-9ccc-c33f1a7dd345
2025-08-29 17:00:37.702535 |
2025-08-29 17:00:37.702669 | LOOP [emit-job-header : Print node information]
2025-08-29 17:00:37.843044 | orchestrator | ok:
2025-08-29 17:00:37.843411 | orchestrator | # Node Information
2025-08-29 17:00:37.843475 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 17:00:37.843522 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 17:00:37.843562 | orchestrator | Username: zuul-testbed03
2025-08-29 17:00:37.843599 | orchestrator | Distro: Debian 12.11
2025-08-29 17:00:37.843643 | orchestrator | Provider: static-testbed
2025-08-29 17:00:37.843682 | orchestrator | Region:
2025-08-29 17:00:37.843718 | orchestrator | Label: testbed-orchestrator
2025-08-29 17:00:37.843751 | orchestrator | Product Name: OpenStack Nova
2025-08-29 17:00:37.843782 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 17:00:37.873025 |
2025-08-29 17:00:37.873275 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 17:00:38.365442 | orchestrator -> localhost | changed
2025-08-29 17:00:38.374184 |
2025-08-29 17:00:38.374313 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 17:00:39.421632 | orchestrator -> localhost | changed
2025-08-29 17:00:39.436435 |
2025-08-29 17:00:39.436561 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 17:00:39.713949 | orchestrator -> localhost | ok
2025-08-29 17:00:39.721490 |
2025-08-29 17:00:39.721623 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 17:00:39.753768 | orchestrator | ok
2025-08-29 17:00:39.771548 | orchestrator | included: /var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 17:00:39.779689 |
2025-08-29 17:00:39.779785 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 17:00:41.369030 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 17:00:41.369515 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/work/d6c29d1409ff413595935ce080b46e42_id_rsa
2025-08-29 17:00:41.369593 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/work/d6c29d1409ff413595935ce080b46e42_id_rsa.pub
2025-08-29 17:00:41.369644 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 17:00:41.369689 | orchestrator -> localhost | SHA256:GI444cc/Mj6iZH80r3VLuxMNsS1Sh7FVR1as6ltsnYY zuul-build-sshkey
2025-08-29 17:00:41.369731 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 17:00:41.369788 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 17:00:41.369827 | orchestrator -> localhost | | .o....=o|
2025-08-29 17:00:41.369866 | orchestrator -> localhost | | +o. o .|
2025-08-29 17:00:41.369902 | orchestrator -> localhost | | . . ..= . |
2025-08-29 17:00:41.369938 | orchestrator -> localhost | | . + o + + . . |
2025-08-29 17:00:41.369973 | orchestrator -> localhost | | + + o S + . |
2025-08-29 17:00:41.370019 | orchestrator -> localhost | | o + . ......|
2025-08-29 17:00:41.370056 | orchestrator -> localhost | | o + =. o.. E+o.|
2025-08-29 17:00:41.370092 | orchestrator -> localhost | |o o..+.oo.o .o. |
2025-08-29 17:00:41.370129 | orchestrator -> localhost | |.. ooo. +o .. |
2025-08-29 17:00:41.370183 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 17:00:41.370282 | orchestrator -> localhost | ok: Runtime: 0:00:01.035060
2025-08-29 17:00:41.381368 |
2025-08-29 17:00:41.381496 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 17:00:41.416803 | orchestrator | ok
2025-08-29 17:00:41.430883 | orchestrator | included: /var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 17:00:41.440811 |
2025-08-29 17:00:41.440909 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 17:00:41.464749 | orchestrator | skipping: Conditional result was False
2025-08-29 17:00:41.476574 |
2025-08-29 17:00:41.476701 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 17:00:42.092353 | orchestrator | changed
2025-08-29 17:00:42.101241 |
2025-08-29 17:00:42.101384 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 17:00:42.383129 | orchestrator | ok
2025-08-29 17:00:42.391878 |
2025-08-29 17:00:42.392041 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 17:00:42.820266 | orchestrator | ok
2025-08-29 17:00:42.828899 |
2025-08-29 17:00:42.829040 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 17:00:43.274071 | orchestrator | ok
2025-08-29 17:00:43.283613 |
2025-08-29 17:00:43.283750 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 17:00:43.319034 | orchestrator | skipping: Conditional result was False
2025-08-29 17:00:43.332679 |
2025-08-29 17:00:43.332828 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 17:00:43.805275 | orchestrator -> localhost | changed
2025-08-29 17:00:43.828426 |
2025-08-29 17:00:43.829602 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 17:00:44.173564 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/work/d6c29d1409ff413595935ce080b46e42_id_rsa (zuul-build-sshkey)
2025-08-29 17:00:44.173865 | orchestrator -> localhost | ok: Runtime: 0:00:00.017331
2025-08-29 17:00:44.181727 |
2025-08-29 17:00:44.181848 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 17:00:44.607523 | orchestrator | ok
2025-08-29 17:00:44.614685 |
2025-08-29 17:00:44.614855 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 17:00:44.641790 | orchestrator | skipping: Conditional result was False
2025-08-29 17:00:44.704396 |
2025-08-29 17:00:44.704545 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 17:00:45.128259 | orchestrator | ok
2025-08-29 17:00:45.143235 |
2025-08-29 17:00:45.143365 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 17:00:45.190609 | orchestrator | ok
2025-08-29 17:00:45.200950 |
2025-08-29 17:00:45.201076 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 17:00:45.519956 | orchestrator -> localhost | ok
2025-08-29 17:00:45.535829 |
2025-08-29 17:00:45.535966 | TASK [validate-host : Collect information about the host]
2025-08-29 17:00:46.778473 | orchestrator | ok
2025-08-29 17:00:46.795489 |
2025-08-29 17:00:46.795614 | TASK [validate-host : Sanitize hostname]
2025-08-29 17:00:46.860772 | orchestrator | ok
2025-08-29 17:00:46.868751 |
2025-08-29 17:00:46.868870 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 17:00:47.420505 | orchestrator -> localhost | changed
2025-08-29 17:00:47.433548 |
2025-08-29 17:00:47.433686 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 17:00:47.874980 | orchestrator | ok
2025-08-29 17:00:47.883583 |
2025-08-29 17:00:47.883719 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 17:00:48.487403 | orchestrator -> localhost | changed
2025-08-29 17:00:48.498671 |
2025-08-29 17:00:48.498779 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 17:00:48.778933 | orchestrator | ok
2025-08-29 17:00:48.788250 |
2025-08-29 17:00:48.788368 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 17:01:10.300271 | orchestrator | changed:
2025-08-29 17:01:10.300572 | orchestrator | .d..t...... src/
2025-08-29 17:01:10.300624 | orchestrator | .d..t...... src/github.com/
2025-08-29 17:01:10.300660 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 17:01:10.300691 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 17:01:10.300721 | orchestrator | RedHat.yml
2025-08-29 17:01:10.315416 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 17:01:10.315434 | orchestrator | RedHat.yml
2025-08-29 17:01:10.315486 | orchestrator | = 1.53.0"...
2025-08-29 17:01:26.614951 | orchestrator | 17:01:26.614 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-08-29 17:01:26.651131 | orchestrator | 17:01:26.651 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-08-29 17:01:27.196128 | orchestrator | 17:01:27.195 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 17:01:27.809802 | orchestrator | 17:01:27.809 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 17:01:28.174594 | orchestrator | 17:01:28.174 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 17:01:29.140998 | orchestrator | 17:01:29.140 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 17:01:32.190890 | orchestrator | 17:01:32.190 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 17:01:33.027430 | orchestrator | 17:01:33.027 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 17:01:33.027582 | orchestrator | 17:01:33.027 STDOUT terraform: Providers are signed by their developers.
2025-08-29 17:01:33.027591 | orchestrator | 17:01:33.027 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 17:01:33.027606 | orchestrator | 17:01:33.027 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 17:01:33.027611 | orchestrator | 17:01:33.027 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 17:01:33.027622 | orchestrator | 17:01:33.027 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 17:01:33.027667 | orchestrator | 17:01:33.027 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 17:01:33.027686 | orchestrator | 17:01:33.027 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 17:01:33.028167 | orchestrator | 17:01:33.028 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 17:01:33.028211 | orchestrator | 17:01:33.028 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 17:01:33.028220 | orchestrator | 17:01:33.028 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 17:01:33.028225 | orchestrator | 17:01:33.028 STDOUT terraform: should now work.
2025-08-29 17:01:33.028256 | orchestrator | 17:01:33.028 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 17:01:33.028325 | orchestrator | 17:01:33.028 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 17:01:33.028682 | orchestrator | 17:01:33.028 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 17:01:33.120455 | orchestrator | 17:01:33.120 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 17:01:33.120516 | orchestrator | 17:01:33.120 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 17:01:33.295916 | orchestrator | 17:01:33.295 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 17:01:33.296015 | orchestrator | 17:01:33.295 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 17:01:33.296031 | orchestrator | 17:01:33.295 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 17:01:33.296042 | orchestrator | 17:01:33.295 STDOUT terraform: for this configuration.
2025-08-29 17:01:33.432653 | orchestrator | 17:01:33.432 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 17:01:33.432739 | orchestrator | 17:01:33.432 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 17:01:33.533526 | orchestrator | 17:01:33.533 STDOUT terraform: ci.auto.tfvars
2025-08-29 17:01:33.538036 | orchestrator | 17:01:33.537 STDOUT terraform: default_custom.tf
2025-08-29 17:01:33.651573 | orchestrator | 17:01:33.651 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
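The provider constraints visible in the init output above (hashicorp/local matching ">= 2.2.0", the latest hashicorp/null, and a truncated ">= 1.53.0" constraint that presumably belongs to terraform-provider-openstack/openstack) would come from a `required_providers` block roughly like the following sketch; the exact block in the testbed repository may differ:

```hcl
# Sketch of the provider requirements implied by the "tofu init" output above.
# Attributing the truncated ">= 1.53.0" constraint to the openstack provider
# is an assumption. The versions actually selected by this run were
# local v2.5.3, null v3.2.4, and openstack v3.3.2.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

Committing the generated `.terraform.lock.hcl` alongside such a block pins the selections (and the signing keys logged above) for future `tofu init` runs, as the init output itself recommends.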
2025-08-29 17:01:34.639818 | orchestrator | 17:01:34.639 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 17:01:35.158392 | orchestrator | 17:01:35.158 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 17:01:35.357800 | orchestrator | 17:01:35.357 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-08-29 17:01:35.357893 | orchestrator | 17:01:35.357 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-08-29 17:01:35.357906 | orchestrator | 17:01:35.357 STDOUT terraform:  + create
2025-08-29 17:01:35.357916 | orchestrator | 17:01:35.357 STDOUT terraform:  <= read (data resources)
2025-08-29 17:01:35.357925 | orchestrator | 17:01:35.357 STDOUT terraform: OpenTofu will perform the following actions:
2025-08-29 17:01:35.358129 | orchestrator | 17:01:35.358 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-08-29 17:01:35.358148 | orchestrator | 17:01:35.358 STDOUT terraform:  # (config refers to values not yet known)
2025-08-29 17:01:35.358160 | orchestrator | 17:01:35.358 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-08-29 17:01:35.358194 | orchestrator | 17:01:35.358 STDOUT terraform:  + checksum = (known after apply)
2025-08-29 17:01:35.358219 | orchestrator | 17:01:35.358 STDOUT terraform:  + created_at = (known after apply)
2025-08-29 17:01:35.358253 | orchestrator | 17:01:35.358 STDOUT terraform:  + file = (known after apply)
2025-08-29 17:01:35.358391 | orchestrator | 17:01:35.358 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.358403 | orchestrator | 17:01:35.358 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.358438 | orchestrator | 17:01:35.358 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-08-29 17:01:35.358453 | orchestrator | 17:01:35.358 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-08-29 17:01:35.358467 | orchestrator | 17:01:35.358 STDOUT terraform:  + most_recent = true
2025-08-29 17:01:35.358480 | orchestrator | 17:01:35.358 STDOUT terraform:  + name = (known after apply)
2025-08-29 17:01:35.358499 | orchestrator | 17:01:35.358 STDOUT terraform:  + protected = (known after apply)
2025-08-29 17:01:35.358512 | orchestrator | 17:01:35.358 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.358526 | orchestrator | 17:01:35.358 STDOUT terraform:  + schema = (known after apply)
2025-08-29 17:01:35.358541 | orchestrator | 17:01:35.358 STDOUT terraform:  + size_bytes = (known after apply)
2025-08-29 17:01:35.358559 | orchestrator | 17:01:35.358 STDOUT terraform:  + tags = (known after apply)
2025-08-29 17:01:35.358575 | orchestrator | 17:01:35.358 STDOUT terraform:  + updated_at = (known after apply)
2025-08-29 17:01:35.358584 | orchestrator | 17:01:35.358 STDOUT terraform:  }
2025-08-29 17:01:35.358753 | orchestrator | 17:01:35.358 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-08-29 17:01:35.358768 | orchestrator | 17:01:35.358 STDOUT terraform:  # (config refers to values not yet known)
2025-08-29 17:01:35.358793 | orchestrator | 17:01:35.358 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-08-29 17:01:35.358825 | orchestrator | 17:01:35.358 STDOUT terraform:  + checksum = (known after apply)
2025-08-29 17:01:35.358848 | orchestrator | 17:01:35.358 STDOUT terraform:  + created_at = (known after apply)
2025-08-29 17:01:35.358881 | orchestrator | 17:01:35.358 STDOUT terraform:  + file = (known after apply)
2025-08-29 17:01:35.358903 | orchestrator | 17:01:35.358 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.358937 | orchestrator | 17:01:35.358 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.358989 | orchestrator | 17:01:35.358 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-08-29 17:01:35.359002 | orchestrator | 17:01:35.358 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-08-29 17:01:35.359025 | orchestrator | 17:01:35.358 STDOUT terraform:  + most_recent = true
2025-08-29 17:01:35.359036 | orchestrator | 17:01:35.359 STDOUT terraform:  + name = (known after apply)
2025-08-29 17:01:35.359074 | orchestrator | 17:01:35.359 STDOUT terraform:  + protected = (known after apply)
2025-08-29 17:01:35.359106 | orchestrator | 17:01:35.359 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.359119 | orchestrator | 17:01:35.359 STDOUT terraform:  + schema = (known after apply)
2025-08-29 17:01:35.359154 | orchestrator | 17:01:35.359 STDOUT terraform:  + size_bytes = (known after apply)
2025-08-29 17:01:35.359194 | orchestrator | 17:01:35.359 STDOUT terraform:  + tags = (known after apply)
2025-08-29 17:01:35.359207 | orchestrator | 17:01:35.359 STDOUT terraform:  + updated_at = (known after apply)
2025-08-29 17:01:35.359218 | orchestrator | 17:01:35.359 STDOUT terraform:  }
2025-08-29 17:01:35.359666 | orchestrator | 17:01:35.359 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-08-29 17:01:35.359720 | orchestrator | 17:01:35.359 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-08-29 17:01:35.359737 | orchestrator | 17:01:35.359 STDOUT terraform:  + content = (known after apply)
2025-08-29 17:01:35.359756 | orchestrator | 17:01:35.359 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-08-29 17:01:35.359773 | orchestrator | 17:01:35.359 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-08-29 17:01:35.359817 | orchestrator | 17:01:35.359 STDOUT terraform:  + content_md5 = (known after apply)
2025-08-29 17:01:35.359851 | orchestrator | 17:01:35.359 STDOUT terraform:  + content_sha1 = (known after apply)
2025-08-29 17:01:35.359882 | orchestrator | 17:01:35.359 STDOUT terraform:  + content_sha256 = (known after apply)
2025-08-29 17:01:35.359920 | orchestrator | 17:01:35.359 STDOUT terraform:  + content_sha512 = (known after apply)
2025-08-29 17:01:35.359944 | orchestrator | 17:01:35.359 STDOUT terraform:  + directory_permission = "0777"
2025-08-29 17:01:35.359991 | orchestrator | 17:01:35.359 STDOUT terraform:  + file_permission = "0644"
2025-08-29 17:01:35.360026 | orchestrator | 17:01:35.359 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-08-29 17:01:35.360064 | orchestrator | 17:01:35.360 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.360077 | orchestrator | 17:01:35.360 STDOUT terraform:  }
2025-08-29 17:01:35.360210 | orchestrator | 17:01:35.360 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-08-29 17:01:35.360224 | orchestrator | 17:01:35.360 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-08-29 17:01:35.360265 | orchestrator | 17:01:35.360 STDOUT terraform:  + content = (known after apply)
2025-08-29 17:01:35.360300 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-08-29 17:01:35.360333 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-08-29 17:01:35.360370 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_md5 = (known after apply)
2025-08-29 17:01:35.360406 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_sha1 = (known after apply)
2025-08-29 17:01:35.360441 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_sha256 = (known after apply)
2025-08-29 17:01:35.360474 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_sha512 = (known after apply)
2025-08-29 17:01:35.360498 | orchestrator | 17:01:35.360 STDOUT terraform:  + directory_permission = "0777"
2025-08-29 17:01:35.360523 | orchestrator | 17:01:35.360 STDOUT terraform:  + file_permission = "0644"
2025-08-29 17:01:35.360554 | orchestrator | 17:01:35.360 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-08-29 17:01:35.360702 | orchestrator | 17:01:35.360 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.360716 | orchestrator | 17:01:35.360 STDOUT terraform:  }
2025-08-29 17:01:35.360734 | orchestrator | 17:01:35.360 STDOUT terraform:  # local_file.inventory will be created
2025-08-29 17:01:35.360742 | orchestrator | 17:01:35.360 STDOUT terraform:  + resource "local_file" "inventory" {
2025-08-29 17:01:35.360750 | orchestrator | 17:01:35.360 STDOUT terraform:  + content = (known after apply)
2025-08-29 17:01:35.360771 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-08-29 17:01:35.360782 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-08-29 17:01:35.360790 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_md5 = (known after apply)
2025-08-29 17:01:35.360801 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_sha1 = (known after apply)
2025-08-29 17:01:35.360825 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_sha256 = (known after apply)
2025-08-29 17:01:35.360863 | orchestrator | 17:01:35.360 STDOUT terraform:  + content_sha512 = (known after apply)
2025-08-29 17:01:35.360881 | orchestrator | 17:01:35.360 STDOUT terraform:  + directory_permission = "0777"
2025-08-29 17:01:35.360897 | orchestrator | 17:01:35.360 STDOUT terraform:  + file_permission = "0644"
2025-08-29 17:01:35.360930 | orchestrator | 17:01:35.360 STDOUT terraform:  + filename = "inventory.ci"
2025-08-29 17:01:35.360962 | orchestrator | 17:01:35.360 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.361013 | orchestrator | 17:01:35.360 STDOUT terraform:  }
2025-08-29 17:01:35.361188 | orchestrator | 17:01:35.361 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-08-29 17:01:35.361216 | orchestrator | 17:01:35.361 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-08-29 17:01:35.361265 | orchestrator | 17:01:35.361 STDOUT terraform:  + content = (sensitive value)
2025-08-29 17:01:35.361306 | orchestrator | 17:01:35.361 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-08-29 17:01:35.361341 | orchestrator | 17:01:35.361 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-08-29 17:01:35.361377 | orchestrator | 17:01:35.361 STDOUT terraform:  + content_md5 = (known after apply)
2025-08-29 17:01:35.361412 | orchestrator | 17:01:35.361 STDOUT terraform:  + content_sha1 = (known after apply)
2025-08-29 17:01:35.361448 | orchestrator | 17:01:35.361 STDOUT terraform:  + content_sha256 = (known after apply)
2025-08-29 17:01:35.361484 | orchestrator | 17:01:35.361 STDOUT terraform:  + content_sha512 = (known after apply)
2025-08-29 17:01:35.361512 | orchestrator | 17:01:35.361 STDOUT terraform:  + directory_permission = "0700"
2025-08-29 17:01:35.361530 | orchestrator | 17:01:35.361 STDOUT terraform:  + file_permission = "0600"
2025-08-29 17:01:35.361566 | orchestrator | 17:01:35.361 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-08-29 17:01:35.361590 | orchestrator | 17:01:35.361 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.361608 | orchestrator | 17:01:35.361 STDOUT terraform:  }
2025-08-29 17:01:35.361625 | orchestrator | 17:01:35.361 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-08-29 17:01:35.361658 | orchestrator | 17:01:35.361 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-08-29 17:01:35.361670 | orchestrator | 17:01:35.361 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.361681 | orchestrator | 17:01:35.361 STDOUT terraform:  }
2025-08-29 17:01:35.361873 | orchestrator | 17:01:35.361 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-08-29 17:01:35.361915 | orchestrator | 17:01:35.361 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-08-29 17:01:35.361924 | orchestrator | 17:01:35.361 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.361932 | orchestrator | 17:01:35.361 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.361940 | orchestrator | 17:01:35.361 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.361952 | orchestrator | 17:01:35.361 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.361961 | orchestrator | 17:01:35.361 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.362070 | orchestrator | 17:01:35.361 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-08-29 17:01:35.362093 | orchestrator | 17:01:35.361 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.362107 | orchestrator | 17:01:35.361 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.362122 | orchestrator | 17:01:35.362 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.362131 | orchestrator | 17:01:35.362 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.362139 | orchestrator | 17:01:35.362 STDOUT terraform:  }
2025-08-29 17:01:35.362322 | orchestrator | 17:01:35.362 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-08-29 17:01:35.362365 | orchestrator | 17:01:35.362 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 17:01:35.362405 | orchestrator | 17:01:35.362 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.362421 | orchestrator | 17:01:35.362 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.362459 | orchestrator | 17:01:35.362 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.362493 | orchestrator | 17:01:35.362 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.362527 | orchestrator | 17:01:35.362 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.362571 | orchestrator | 17:01:35.362 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-08-29 17:01:35.362605 | orchestrator | 17:01:35.362 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.362627 | orchestrator | 17:01:35.362 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.362643 | orchestrator | 17:01:35.362 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.362671 | orchestrator | 17:01:35.362 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.362686 | orchestrator | 17:01:35.362 STDOUT terraform:  }
2025-08-29 17:01:35.362731 | orchestrator | 17:01:35.362 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-08-29 17:01:35.362775 | orchestrator | 17:01:35.362 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 17:01:35.362806 | orchestrator | 17:01:35.362 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.362828 | orchestrator | 17:01:35.362 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.362860 | orchestrator | 17:01:35.362 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.362895 | orchestrator | 17:01:35.362 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.363057 | orchestrator | 17:01:35.362 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.363068 | orchestrator | 17:01:35.362 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-08-29 17:01:35.363075 | orchestrator | 17:01:35.362 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.363082 | orchestrator | 17:01:35.362 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.363089 | orchestrator | 17:01:35.363 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.363096 | orchestrator | 17:01:35.363 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.363105 | orchestrator | 17:01:35.363 STDOUT terraform:  }
2025-08-29 17:01:35.363202 | orchestrator | 17:01:35.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-08-29 17:01:35.363246 | orchestrator | 17:01:35.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 17:01:35.363281 | orchestrator | 17:01:35.363 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.363313 | orchestrator | 17:01:35.363 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.363343 | orchestrator | 17:01:35.363 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.363379 | orchestrator | 17:01:35.363 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.363414 | orchestrator | 17:01:35.363 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.363458 | orchestrator | 17:01:35.363 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-08-29 17:01:35.363492 | orchestrator | 17:01:35.363 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.363503 | orchestrator | 17:01:35.363 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.363532 | orchestrator | 17:01:35.363 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.363555 | orchestrator | 17:01:35.363 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.363566 | orchestrator | 17:01:35.363 STDOUT terraform:  }
2025-08-29 17:01:35.363613 | orchestrator | 17:01:35.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-08-29 17:01:35.363656 | orchestrator | 17:01:35.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 17:01:35.363690 | orchestrator | 17:01:35.363 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.363713 | orchestrator | 17:01:35.363 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.363751 | orchestrator | 17:01:35.363 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.363785 | orchestrator | 17:01:35.363 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.363820 | orchestrator | 17:01:35.363 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.363862 | orchestrator | 17:01:35.363 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-08-29 17:01:35.363899 | orchestrator | 17:01:35.363 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.363910 | orchestrator | 17:01:35.363 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.363938 | orchestrator | 17:01:35.363 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.363960 | orchestrator | 17:01:35.363 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.364021 | orchestrator | 17:01:35.363 STDOUT terraform:  }
2025-08-29 17:01:35.364030 | orchestrator | 17:01:35.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-08-29 17:01:35.364074 | orchestrator | 17:01:35.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 17:01:35.364190 | orchestrator | 17:01:35.364 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.364205 | orchestrator | 17:01:35.364 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.364221 | orchestrator | 17:01:35.364 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.364232 | orchestrator | 17:01:35.364 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.364246 | orchestrator | 17:01:35.364 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.364256 | orchestrator | 17:01:35.364 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-08-29 17:01:35.364284 | orchestrator | 17:01:35.364 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.364298 | orchestrator | 17:01:35.364 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.364322 | orchestrator | 17:01:35.364 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.364338 | orchestrator | 17:01:35.364 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.364351 | orchestrator | 17:01:35.364 STDOUT terraform:  }
2025-08-29 17:01:35.364396 | orchestrator | 17:01:35.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-08-29 17:01:35.364441 | orchestrator | 17:01:35.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 17:01:35.364475 | orchestrator | 17:01:35.364 STDOUT terraform:  + attachment = (known after apply)
2025-08-29 17:01:35.364489 | orchestrator | 17:01:35.364 STDOUT terraform:  + availability_zone = "nova"
2025-08-29 17:01:35.364528 | orchestrator | 17:01:35.364 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.364561 | orchestrator | 17:01:35.364 STDOUT terraform:  + image_id = (known after apply)
2025-08-29 17:01:35.364597 | orchestrator | 17:01:35.364 STDOUT terraform:  + metadata = (known after apply)
2025-08-29 17:01:35.364639 | orchestrator | 17:01:35.364 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-08-29 17:01:35.364675 | orchestrator | 17:01:35.364 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.364686 | orchestrator | 17:01:35.364 STDOUT terraform:  + size = 80
2025-08-29 17:01:35.364705 | orchestrator | 17:01:35.364 STDOUT terraform:  + volume_retype_policy = "never"
2025-08-29 17:01:35.364733 | orchestrator | 17:01:35.364 STDOUT terraform:  + volume_type = "ssd"
2025-08-29 17:01:35.364743 | orchestrator | 17:01:35.364 STDOUT terraform:  }
2025-08-29 17:01:35.364787 | orchestrator | 17:01:35.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-08-29 17:01:35.364829 | orchestrator | 17:01:35.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-08-29 17:01:35.364868 | orchestrator | 17:01:35.364 STDOUT
terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.364878 | orchestrator | 17:01:35.364 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.364916 | orchestrator | 17:01:35.364 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.364950 | orchestrator | 17:01:35.364 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.364998 | orchestrator | 17:01:35.364 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 17:01:35.365032 | orchestrator | 17:01:35.364 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.365053 | orchestrator | 17:01:35.365 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.365075 | orchestrator | 17:01:35.365 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.365099 | orchestrator | 17:01:35.365 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.365108 | orchestrator | 17:01:35.365 STDOUT terraform:  } 2025-08-29 17:01:35.365156 | orchestrator | 17:01:35.365 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 17:01:35.365198 | orchestrator | 17:01:35.365 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.365327 | orchestrator | 17:01:35.365 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.365338 | orchestrator | 17:01:35.365 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.365345 | orchestrator | 17:01:35.365 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.365351 | orchestrator | 17:01:35.365 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.365360 | orchestrator | 17:01:35.365 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 17:01:35.365381 | orchestrator | 17:01:35.365 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.365402 | orchestrator | 17:01:35.365 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.365424 | 
orchestrator | 17:01:35.365 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.365447 | orchestrator | 17:01:35.365 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.365457 | orchestrator | 17:01:35.365 STDOUT terraform:  } 2025-08-29 17:01:35.365500 | orchestrator | 17:01:35.365 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 17:01:35.365541 | orchestrator | 17:01:35.365 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.365576 | orchestrator | 17:01:35.365 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.365595 | orchestrator | 17:01:35.365 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.365630 | orchestrator | 17:01:35.365 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.365666 | orchestrator | 17:01:35.365 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.365702 | orchestrator | 17:01:35.365 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 17:01:35.365738 | orchestrator | 17:01:35.365 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.365748 | orchestrator | 17:01:35.365 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.365786 | orchestrator | 17:01:35.365 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.365796 | orchestrator | 17:01:35.365 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.365805 | orchestrator | 17:01:35.365 STDOUT terraform:  } 2025-08-29 17:01:35.365853 | orchestrator | 17:01:35.365 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 17:01:35.365893 | orchestrator | 17:01:35.365 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.365927 | orchestrator | 17:01:35.365 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.365951 | orchestrator | 
17:01:35.365 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.366008 | orchestrator | 17:01:35.365 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.366055 | orchestrator | 17:01:35.366 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.366095 | orchestrator | 17:01:35.366 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 17:01:35.366130 | orchestrator | 17:01:35.366 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.366144 | orchestrator | 17:01:35.366 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.366164 | orchestrator | 17:01:35.366 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.366178 | orchestrator | 17:01:35.366 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.366192 | orchestrator | 17:01:35.366 STDOUT terraform:  } 2025-08-29 17:01:35.366239 | orchestrator | 17:01:35.366 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 17:01:35.366278 | orchestrator | 17:01:35.366 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.366313 | orchestrator | 17:01:35.366 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.366337 | orchestrator | 17:01:35.366 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.366491 | orchestrator | 17:01:35.366 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.366505 | orchestrator | 17:01:35.366 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.366511 | orchestrator | 17:01:35.366 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 17:01:35.366518 | orchestrator | 17:01:35.366 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.366534 | orchestrator | 17:01:35.366 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.366543 | orchestrator | 17:01:35.366 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 
17:01:35.366550 | orchestrator | 17:01:35.366 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.366556 | orchestrator | 17:01:35.366 STDOUT terraform:  } 2025-08-29 17:01:35.366591 | orchestrator | 17:01:35.366 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 17:01:35.366632 | orchestrator | 17:01:35.366 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.366667 | orchestrator | 17:01:35.366 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.366689 | orchestrator | 17:01:35.366 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.366726 | orchestrator | 17:01:35.366 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.366760 | orchestrator | 17:01:35.366 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.366799 | orchestrator | 17:01:35.366 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 17:01:35.366833 | orchestrator | 17:01:35.366 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.366860 | orchestrator | 17:01:35.366 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.366869 | orchestrator | 17:01:35.366 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.366894 | orchestrator | 17:01:35.366 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.366904 | orchestrator | 17:01:35.366 STDOUT terraform:  } 2025-08-29 17:01:35.366949 | orchestrator | 17:01:35.366 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 17:01:35.367024 | orchestrator | 17:01:35.366 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.367036 | orchestrator | 17:01:35.366 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.367045 | orchestrator | 17:01:35.367 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.367086 | 
orchestrator | 17:01:35.367 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.367122 | orchestrator | 17:01:35.367 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.367159 | orchestrator | 17:01:35.367 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 17:01:35.367194 | orchestrator | 17:01:35.367 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.367204 | orchestrator | 17:01:35.367 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.367234 | orchestrator | 17:01:35.367 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.367249 | orchestrator | 17:01:35.367 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.367262 | orchestrator | 17:01:35.367 STDOUT terraform:  } 2025-08-29 17:01:35.367315 | orchestrator | 17:01:35.367 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 17:01:35.367354 | orchestrator | 17:01:35.367 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.367389 | orchestrator | 17:01:35.367 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.367412 | orchestrator | 17:01:35.367 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.367447 | orchestrator | 17:01:35.367 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.367482 | orchestrator | 17:01:35.367 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.367618 | orchestrator | 17:01:35.367 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 17:01:35.367628 | orchestrator | 17:01:35.367 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.367633 | orchestrator | 17:01:35.367 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.367639 | orchestrator | 17:01:35.367 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.367645 | orchestrator | 17:01:35.367 STDOUT terraform:  + volume_type = "ssd" 
2025-08-29 17:01:35.367650 | orchestrator | 17:01:35.367 STDOUT terraform:  } 2025-08-29 17:01:35.367658 | orchestrator | 17:01:35.367 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 17:01:35.367694 | orchestrator | 17:01:35.367 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 17:01:35.367727 | orchestrator | 17:01:35.367 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 17:01:35.367751 | orchestrator | 17:01:35.367 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.367784 | orchestrator | 17:01:35.367 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.367819 | orchestrator | 17:01:35.367 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 17:01:35.367856 | orchestrator | 17:01:35.367 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 17:01:35.367892 | orchestrator | 17:01:35.367 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.367912 | orchestrator | 17:01:35.367 STDOUT terraform:  + size = 20 2025-08-29 17:01:35.367935 | orchestrator | 17:01:35.367 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 17:01:35.367958 | orchestrator | 17:01:35.367 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 17:01:35.367984 | orchestrator | 17:01:35.367 STDOUT terraform:  } 2025-08-29 17:01:35.368032 | orchestrator | 17:01:35.367 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 17:01:35.368068 | orchestrator | 17:01:35.368 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 17:01:35.368101 | orchestrator | 17:01:35.368 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 17:01:35.368135 | orchestrator | 17:01:35.368 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 17:01:35.368168 | orchestrator | 17:01:35.368 STDOUT terraform:  + all_metadata = (known after apply) 
2025-08-29 17:01:35.368203 | orchestrator | 17:01:35.368 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.368225 | orchestrator | 17:01:35.368 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.368241 | orchestrator | 17:01:35.368 STDOUT terraform:  + config_drive = true 2025-08-29 17:01:35.368275 | orchestrator | 17:01:35.368 STDOUT terraform:  + created = (known after apply) 2025-08-29 17:01:35.368309 | orchestrator | 17:01:35.368 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 17:01:35.368342 | orchestrator | 17:01:35.368 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-08-29 17:01:35.368372 | orchestrator | 17:01:35.368 STDOUT terraform:  + force_delete = false 2025-08-29 17:01:35.368402 | orchestrator | 17:01:35.368 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 17:01:35.368443 | orchestrator | 17:01:35.368 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.368469 | orchestrator | 17:01:35.368 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 17:01:35.368504 | orchestrator | 17:01:35.368 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 17:01:35.368524 | orchestrator | 17:01:35.368 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 17:01:35.368557 | orchestrator | 17:01:35.368 STDOUT terraform:  + name = "testbed-manager" 2025-08-29 17:01:35.368580 | orchestrator | 17:01:35.368 STDOUT terraform:  + power_state = "active" 2025-08-29 17:01:35.368615 | orchestrator | 17:01:35.368 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.368739 | orchestrator | 17:01:35.368 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 17:01:35.368753 | orchestrator | 17:01:35.368 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 17:01:35.368759 | orchestrator | 17:01:35.368 STDOUT terraform:  + updated = (known after apply) 2025-08-29 17:01:35.368765 | orchestrator | 17:01:35.368 STDOUT terraform:  + 
user_data = (sensitive value) 2025-08-29 17:01:35.368771 | orchestrator | 17:01:35.368 STDOUT terraform:  + block_device { 2025-08-29 17:01:35.368777 | orchestrator | 17:01:35.368 STDOUT terraform:  + boot_index = 0 2025-08-29 17:01:35.368784 | orchestrator | 17:01:35.368 STDOUT terraform:  + delete_on_termination = false 2025-08-29 17:01:35.368802 | orchestrator | 17:01:35.368 STDOUT terraform:  + destination_type = "volume" 2025-08-29 17:01:35.368832 | orchestrator | 17:01:35.368 STDOUT terraform:  + multiattach = false 2025-08-29 17:01:35.368859 | orchestrator | 17:01:35.368 STDOUT terraform:  + source_type = "volume" 2025-08-29 17:01:35.368896 | orchestrator | 17:01:35.368 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 17:01:35.368906 | orchestrator | 17:01:35.368 STDOUT terraform:  } 2025-08-29 17:01:35.368913 | orchestrator | 17:01:35.368 STDOUT terraform:  + network { 2025-08-29 17:01:35.368936 | orchestrator | 17:01:35.368 STDOUT terraform:  + access_network = false 2025-08-29 17:01:35.368985 | orchestrator | 17:01:35.368 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 17:01:35.368996 | orchestrator | 17:01:35.368 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 17:01:35.369030 | orchestrator | 17:01:35.368 STDOUT terraform:  + mac = (known after apply) 2025-08-29 17:01:35.369059 | orchestrator | 17:01:35.369 STDOUT terraform:  + name = (known after apply) 2025-08-29 17:01:35.369088 | orchestrator | 17:01:35.369 STDOUT terraform:  + port = (known after apply) 2025-08-29 17:01:35.369118 | orchestrator | 17:01:35.369 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 17:01:35.369127 | orchestrator | 17:01:35.369 STDOUT terraform:  } 2025-08-29 17:01:35.369135 | orchestrator | 17:01:35.369 STDOUT terraform:  } 2025-08-29 17:01:35.369194 | orchestrator | 17:01:35.369 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-08-29 17:01:35.369234 | orchestrator | 17:01:35.369 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 17:01:35.369269 | orchestrator | 17:01:35.369 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 17:01:35.369309 | orchestrator | 17:01:35.369 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 17:01:35.369338 | orchestrator | 17:01:35.369 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 17:01:35.369372 | orchestrator | 17:01:35.369 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.369396 | orchestrator | 17:01:35.369 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.369419 | orchestrator | 17:01:35.369 STDOUT terraform:  + config_drive = true 2025-08-29 17:01:35.369457 | orchestrator | 17:01:35.369 STDOUT terraform:  + created = (known after apply) 2025-08-29 17:01:35.369491 | orchestrator | 17:01:35.369 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 17:01:35.369522 | orchestrator | 17:01:35.369 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 17:01:35.369537 | orchestrator | 17:01:35.369 STDOUT terraform:  + force_delete = false 2025-08-29 17:01:35.369566 | orchestrator | 17:01:35.369 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 17:01:35.369602 | orchestrator | 17:01:35.369 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.369636 | orchestrator | 17:01:35.369 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 17:01:35.369674 | orchestrator | 17:01:35.369 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 17:01:35.369688 | orchestrator | 17:01:35.369 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 17:01:35.369721 | orchestrator | 17:01:35.369 STDOUT terraform:  + name = "testbed-node-0" 2025-08-29 17:01:35.369745 | orchestrator | 17:01:35.369 STDOUT terraform:  + power_state = "active" 2025-08-29 17:01:35.369849 | orchestrator | 17:01:35.369 STDOUT terraform:  + region = (known after 
apply) 2025-08-29 17:01:35.369857 | orchestrator | 17:01:35.369 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 17:01:35.369863 | orchestrator | 17:01:35.369 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 17:01:35.369871 | orchestrator | 17:01:35.369 STDOUT terraform:  + updated = (known after apply) 2025-08-29 17:01:35.369910 | orchestrator | 17:01:35.369 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 17:01:35.369920 | orchestrator | 17:01:35.369 STDOUT terraform:  + block_device { 2025-08-29 17:01:35.369935 | orchestrator | 17:01:35.369 STDOUT terraform:  + boot_index = 0 2025-08-29 17:01:35.369984 | orchestrator | 17:01:35.369 STDOUT terraform:  + delete_on_termination = false 2025-08-29 17:01:35.370032 | orchestrator | 17:01:35.369 STDOUT terraform:  + destination_type = "volume" 2025-08-29 17:01:35.370057 | orchestrator | 17:01:35.370 STDOUT terraform:  + multiattach = false 2025-08-29 17:01:35.370082 | orchestrator | 17:01:35.370 STDOUT terraform:  + source_type = "volume" 2025-08-29 17:01:35.370120 | orchestrator | 17:01:35.370 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 17:01:35.370130 | orchestrator | 17:01:35.370 STDOUT terraform:  } 2025-08-29 17:01:35.370137 | orchestrator | 17:01:35.370 STDOUT terraform:  + network { 2025-08-29 17:01:35.370160 | orchestrator | 17:01:35.370 STDOUT terraform:  + access_network = false 2025-08-29 17:01:35.370191 | orchestrator | 17:01:35.370 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 17:01:35.370222 | orchestrator | 17:01:35.370 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 17:01:35.370252 | orchestrator | 17:01:35.370 STDOUT terraform:  + mac = (known after apply) 2025-08-29 17:01:35.370283 | orchestrator | 17:01:35.370 STDOUT terraform:  + name = (known after apply) 2025-08-29 17:01:35.370314 | orchestrator | 17:01:35.370 STDOUT terraform:  + port = (known after apply) 2025-08-29 
17:01:35.370345 | orchestrator | 17:01:35.370 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 17:01:35.370354 | orchestrator | 17:01:35.370 STDOUT terraform:  } 2025-08-29 17:01:35.370362 | orchestrator | 17:01:35.370 STDOUT terraform:  } 2025-08-29 17:01:35.370409 | orchestrator | 17:01:35.370 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-08-29 17:01:35.370452 | orchestrator | 17:01:35.370 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 17:01:35.370484 | orchestrator | 17:01:35.370 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 17:01:35.370513 | orchestrator | 17:01:35.370 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 17:01:35.370552 | orchestrator | 17:01:35.370 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 17:01:35.370587 | orchestrator | 17:01:35.370 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.370602 | orchestrator | 17:01:35.370 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.370616 | orchestrator | 17:01:35.370 STDOUT terraform:  + config_drive = true 2025-08-29 17:01:35.370654 | orchestrator | 17:01:35.370 STDOUT terraform:  + created = (known after apply) 2025-08-29 17:01:35.370688 | orchestrator | 17:01:35.370 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 17:01:35.370719 | orchestrator | 17:01:35.370 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 17:01:35.370742 | orchestrator | 17:01:35.370 STDOUT terraform:  + force_delete = false 2025-08-29 17:01:35.370782 | orchestrator | 17:01:35.370 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 17:01:35.370810 | orchestrator | 17:01:35.370 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.370845 | orchestrator | 17:01:35.370 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 17:01:35.370878 | orchestrator | 17:01:35.370 STDOUT 
terraform:  + image_name = (known after apply) 2025-08-29 17:01:35.371018 | orchestrator | 17:01:35.370 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 17:01:35.371031 | orchestrator | 17:01:35.370 STDOUT terraform:  + name = "testbed-node-1" 2025-08-29 17:01:35.371036 | orchestrator | 17:01:35.370 STDOUT terraform:  + power_state = "active" 2025-08-29 17:01:35.371042 | orchestrator | 17:01:35.370 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.371048 | orchestrator | 17:01:35.370 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 17:01:35.371056 | orchestrator | 17:01:35.371 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 17:01:35.371064 | orchestrator | 17:01:35.371 STDOUT terraform:  + updated = (known after apply) 2025-08-29 17:01:35.371116 | orchestrator | 17:01:35.371 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 17:01:35.371125 | orchestrator | 17:01:35.371 STDOUT terraform:  + block_device { 2025-08-29 17:01:35.371151 | orchestrator | 17:01:35.371 STDOUT terraform:  + boot_index = 0 2025-08-29 17:01:35.371183 | orchestrator | 17:01:35.371 STDOUT terraform:  + delete_on_termination = false 2025-08-29 17:01:35.371209 | orchestrator | 17:01:35.371 STDOUT terraform:  + destination_type = "volume" 2025-08-29 17:01:35.371237 | orchestrator | 17:01:35.371 STDOUT terraform:  + multiattach = false 2025-08-29 17:01:35.371265 | orchestrator | 17:01:35.371 STDOUT terraform:  + source_type = "volume" 2025-08-29 17:01:35.371303 | orchestrator | 17:01:35.371 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 17:01:35.371311 | orchestrator | 17:01:35.371 STDOUT terraform:  } 2025-08-29 17:01:35.371318 | orchestrator | 17:01:35.371 STDOUT terraform:  + network { 2025-08-29 17:01:35.371342 | orchestrator | 17:01:35.371 STDOUT terraform:  + access_network = false 2025-08-29 17:01:35.371371 | orchestrator | 17:01:35.371 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-08-29 17:01:35.371401 | orchestrator | 17:01:35.371 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 17:01:35.371431 | orchestrator | 17:01:35.371 STDOUT terraform:  + mac = (known after apply) 2025-08-29 17:01:35.371461 | orchestrator | 17:01:35.371 STDOUT terraform:  + name = (known after apply) 2025-08-29 17:01:35.371492 | orchestrator | 17:01:35.371 STDOUT terraform:  + port = (known after apply) 2025-08-29 17:01:35.371523 | orchestrator | 17:01:35.371 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 17:01:35.371531 | orchestrator | 17:01:35.371 STDOUT terraform:  } 2025-08-29 17:01:35.371539 | orchestrator | 17:01:35.371 STDOUT terraform:  } 2025-08-29 17:01:35.371586 | orchestrator | 17:01:35.371 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-08-29 17:01:35.371628 | orchestrator | 17:01:35.371 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 17:01:35.371665 | orchestrator | 17:01:35.371 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 17:01:35.371699 | orchestrator | 17:01:35.371 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 17:01:35.371731 | orchestrator | 17:01:35.371 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 17:01:35.371761 | orchestrator | 17:01:35.371 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.371786 | orchestrator | 17:01:35.371 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 17:01:35.371799 | orchestrator | 17:01:35.371 STDOUT terraform:  + config_drive = true 2025-08-29 17:01:35.371839 | orchestrator | 17:01:35.371 STDOUT terraform:  + created = (known after apply) 2025-08-29 17:01:35.371864 | orchestrator | 17:01:35.371 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 17:01:35.371897 | orchestrator | 17:01:35.371 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 17:01:35.371913 | orchestrator | 17:01:35.371 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
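The compute side of this plan (keypair, `node_server` instances booting from volumes, and the `node_volume_attachment` resources) would come from Terraform configuration roughly along the following lines. This is a minimal sketch inferred from the plan output, not the actual code from the osism/testbed repository; the `count` values and the two volume-ID variables are assumptions.

```hcl
# Sketch only: resource names mirror the plan; the variables are hypothetical inputs.
variable "node_boot_volume_ids" {
  type = list(string) # one boot volume per node (assumed)
}

variable "node_extra_volume_ids" {
  type = list(string) # extra data volumes, attachments [0..8] (assumed)
}

variable "node_port_ids" {
  type = list(string) # management port per node (assumed)
}

resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6 # testbed-node-0 .. testbed-node-5
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"

  # Boot from a pre-created volume, kept when the instance is deleted.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = var.node_boot_volume_ids[count.index]
  }

  network {
    port = var.node_port_ids[count.index]
  }
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = var.node_extra_volume_ids[count.index]
}
```

With `delete_on_termination = false` on the root `block_device`, destroying an instance leaves its boot volume behind, which matches the plan's values.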
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 17:01:35.389444 | orchestrator | 17:01:35.389 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 17:01:35.389479 | orchestrator | 17:01:35.389 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.389513 | orchestrator | 17:01:35.389 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 17:01:35.389547 | orchestrator | 17:01:35.389 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 17:01:35.389582 | orchestrator | 17:01:35.389 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 17:01:35.389617 | orchestrator | 17:01:35.389 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 17:01:35.389653 | orchestrator | 17:01:35.389 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.389687 | orchestrator | 17:01:35.389 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 17:01:35.389730 | orchestrator | 17:01:35.389 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 17:01:35.389765 | orchestrator | 17:01:35.389 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 17:01:35.389800 | orchestrator | 17:01:35.389 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 17:01:35.389834 | orchestrator | 17:01:35.389 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.389869 | orchestrator | 17:01:35.389 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 17:01:35.389903 | orchestrator | 17:01:35.389 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.389921 | orchestrator | 17:01:35.389 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.389946 | orchestrator | 17:01:35.389 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 17:01:35.389952 | orchestrator | 17:01:35.389 STDOUT terraform:  } 2025-08-29 17:01:35.389985 | orchestrator | 17:01:35.389 STDOUT terraform:  
+ allowed_address_pairs { 2025-08-29 17:01:35.390030 | orchestrator | 17:01:35.389 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 17:01:35.390037 | orchestrator | 17:01:35.390 STDOUT terraform:  } 2025-08-29 17:01:35.390059 | orchestrator | 17:01:35.390 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.390086 | orchestrator | 17:01:35.390 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 17:01:35.390097 | orchestrator | 17:01:35.390 STDOUT terraform:  } 2025-08-29 17:01:35.390116 | orchestrator | 17:01:35.390 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.390142 | orchestrator | 17:01:35.390 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 17:01:35.390149 | orchestrator | 17:01:35.390 STDOUT terraform:  } 2025-08-29 17:01:35.390175 | orchestrator | 17:01:35.390 STDOUT terraform:  + binding (known after apply) 2025-08-29 17:01:35.390181 | orchestrator | 17:01:35.390 STDOUT terraform:  + fixed_ip { 2025-08-29 17:01:35.390209 | orchestrator | 17:01:35.390 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 17:01:35.390240 | orchestrator | 17:01:35.390 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 17:01:35.390248 | orchestrator | 17:01:35.390 STDOUT terraform:  } 2025-08-29 17:01:35.390256 | orchestrator | 17:01:35.390 STDOUT terraform:  } 2025-08-29 17:01:35.390338 | orchestrator | 17:01:35.390 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 17:01:35.390347 | orchestrator | 17:01:35.390 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 17:01:35.390373 | orchestrator | 17:01:35.390 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 17:01:35.390405 | orchestrator | 17:01:35.390 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 17:01:35.390439 | orchestrator | 17:01:35.390 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-08-29 17:01:35.390473 | orchestrator | 17:01:35.390 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.390508 | orchestrator | 17:01:35.390 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 17:01:35.390543 | orchestrator | 17:01:35.390 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 17:01:35.390584 | orchestrator | 17:01:35.390 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 17:01:35.390613 | orchestrator | 17:01:35.390 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 17:01:35.390650 | orchestrator | 17:01:35.390 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.393820 | orchestrator | 17:01:35.390 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 17:01:35.393862 | orchestrator | 17:01:35.393 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 17:01:35.393874 | orchestrator | 17:01:35.393 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 17:01:35.393881 | orchestrator | 17:01:35.393 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 17:01:35.394238 | orchestrator | 17:01:35.393 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.394253 | orchestrator | 17:01:35.393 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 17:01:35.394260 | orchestrator | 17:01:35.393 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.394352 | orchestrator | 17:01:35.394 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.394358 | orchestrator | 17:01:35.394 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 17:01:35.394373 | orchestrator | 17:01:35.394 STDOUT terraform:  } 2025-08-29 17:01:35.394380 | orchestrator | 17:01:35.394 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.394386 | orchestrator | 17:01:35.394 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 17:01:35.394394 | 
orchestrator | 17:01:35.394 STDOUT terraform:  } 2025-08-29 17:01:35.394400 | orchestrator | 17:01:35.394 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.394407 | orchestrator | 17:01:35.394 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 17:01:35.394414 | orchestrator | 17:01:35.394 STDOUT terraform:  } 2025-08-29 17:01:35.394422 | orchestrator | 17:01:35.394 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.394479 | orchestrator | 17:01:35.394 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 17:01:35.394486 | orchestrator | 17:01:35.394 STDOUT terraform:  } 2025-08-29 17:01:35.394490 | orchestrator | 17:01:35.394 STDOUT terraform:  + binding (known after apply) 2025-08-29 17:01:35.394496 | orchestrator | 17:01:35.394 STDOUT terraform:  + fixed_ip { 2025-08-29 17:01:35.394536 | orchestrator | 17:01:35.394 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 17:01:35.394542 | orchestrator | 17:01:35.394 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 17:01:35.394547 | orchestrator | 17:01:35.394 STDOUT terraform:  } 2025-08-29 17:01:35.394552 | orchestrator | 17:01:35.394 STDOUT terraform:  } 2025-08-29 17:01:35.394607 | orchestrator | 17:01:35.394 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 17:01:35.394650 | orchestrator | 17:01:35.394 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 17:01:35.394941 | orchestrator | 17:01:35.394 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 17:01:35.394951 | orchestrator | 17:01:35.394 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 17:01:35.394955 | orchestrator | 17:01:35.394 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 17:01:35.394958 | orchestrator | 17:01:35.394 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.394962 | orchestrator | 
17:01:35.394 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 17:01:35.395007 | orchestrator | 17:01:35.394 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 17:01:35.395011 | orchestrator | 17:01:35.394 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 17:01:35.395015 | orchestrator | 17:01:35.394 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 17:01:35.395019 | orchestrator | 17:01:35.394 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.395025 | orchestrator | 17:01:35.394 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 17:01:35.395029 | orchestrator | 17:01:35.394 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 17:01:35.395040 | orchestrator | 17:01:35.394 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 17:01:35.396098 | orchestrator | 17:01:35.395 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 17:01:35.396121 | orchestrator | 17:01:35.395 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.396125 | orchestrator | 17:01:35.395 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 17:01:35.396129 | orchestrator | 17:01:35.395 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.396133 | orchestrator | 17:01:35.395 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396136 | orchestrator | 17:01:35.395 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 17:01:35.396141 | orchestrator | 17:01:35.395 STDOUT terraform:  } 2025-08-29 17:01:35.396145 | orchestrator | 17:01:35.395 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396148 | orchestrator | 17:01:35.395 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 17:01:35.396152 | orchestrator | 17:01:35.395 STDOUT terraform:  } 2025-08-29 17:01:35.396156 | orchestrator | 17:01:35.395 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 
17:01:35.396160 | orchestrator | 17:01:35.395 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 17:01:35.396163 | orchestrator | 17:01:35.395 STDOUT terraform:  } 2025-08-29 17:01:35.396167 | orchestrator | 17:01:35.395 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396171 | orchestrator | 17:01:35.395 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 17:01:35.396175 | orchestrator | 17:01:35.395 STDOUT terraform:  } 2025-08-29 17:01:35.396178 | orchestrator | 17:01:35.395 STDOUT terraform:  + binding (known after apply) 2025-08-29 17:01:35.396182 | orchestrator | 17:01:35.395 STDOUT terraform:  + fixed_ip { 2025-08-29 17:01:35.396186 | orchestrator | 17:01:35.395 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 17:01:35.396190 | orchestrator | 17:01:35.395 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 17:01:35.396193 | orchestrator | 17:01:35.395 STDOUT terraform:  } 2025-08-29 17:01:35.396197 | orchestrator | 17:01:35.395 STDOUT terraform:  } 2025-08-29 17:01:35.396201 | orchestrator | 17:01:35.395 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 17:01:35.396206 | orchestrator | 17:01:35.395 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 17:01:35.396209 | orchestrator | 17:01:35.395 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 17:01:35.396213 | orchestrator | 17:01:35.395 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 17:01:35.396217 | orchestrator | 17:01:35.395 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 17:01:35.396221 | orchestrator | 17:01:35.395 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.396224 | orchestrator | 17:01:35.395 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 17:01:35.396228 | orchestrator | 17:01:35.395 STDOUT terraform:  + device_owner = (known after 
apply) 2025-08-29 17:01:35.396232 | orchestrator | 17:01:35.395 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 17:01:35.396244 | orchestrator | 17:01:35.395 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 17:01:35.396248 | orchestrator | 17:01:35.395 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.396252 | orchestrator | 17:01:35.395 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 17:01:35.396255 | orchestrator | 17:01:35.395 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 17:01:35.396259 | orchestrator | 17:01:35.395 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 17:01:35.396263 | orchestrator | 17:01:35.396 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 17:01:35.396267 | orchestrator | 17:01:35.396 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.396274 | orchestrator | 17:01:35.396 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 17:01:35.396278 | orchestrator | 17:01:35.396 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.396282 | orchestrator | 17:01:35.396 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396286 | orchestrator | 17:01:35.396 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 17:01:35.396289 | orchestrator | 17:01:35.396 STDOUT terraform:  } 2025-08-29 17:01:35.396293 | orchestrator | 17:01:35.396 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396297 | orchestrator | 17:01:35.396 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 17:01:35.396301 | orchestrator | 17:01:35.396 STDOUT terraform:  } 2025-08-29 17:01:35.396304 | orchestrator | 17:01:35.396 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396308 | orchestrator | 17:01:35.396 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 17:01:35.396312 | orchestrator | 17:01:35.396 STDOUT terraform:  } 
2025-08-29 17:01:35.396318 | orchestrator | 17:01:35.396 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 17:01:35.396322 | orchestrator | 17:01:35.396 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 17:01:35.396327 | orchestrator | 17:01:35.396 STDOUT terraform:  } 2025-08-29 17:01:35.396346 | orchestrator | 17:01:35.396 STDOUT terraform:  + binding (known after apply) 2025-08-29 17:01:35.396353 | orchestrator | 17:01:35.396 STDOUT terraform:  + fixed_ip { 2025-08-29 17:01:35.396379 | orchestrator | 17:01:35.396 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 17:01:35.396407 | orchestrator | 17:01:35.396 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 17:01:35.396413 | orchestrator | 17:01:35.396 STDOUT terraform:  } 2025-08-29 17:01:35.396428 | orchestrator | 17:01:35.396 STDOUT terraform:  } 2025-08-29 17:01:35.396475 | orchestrator | 17:01:35.396 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 17:01:35.396521 | orchestrator | 17:01:35.396 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 17:01:35.396541 | orchestrator | 17:01:35.396 STDOUT terraform:  + force_destroy = false 2025-08-29 17:01:35.396570 | orchestrator | 17:01:35.396 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.396597 | orchestrator | 17:01:35.396 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 17:01:35.396625 | orchestrator | 17:01:35.396 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.396652 | orchestrator | 17:01:35.396 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 17:01:35.396680 | orchestrator | 17:01:35.396 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 17:01:35.396686 | orchestrator | 17:01:35.396 STDOUT terraform:  } 2025-08-29 17:01:35.396723 | orchestrator | 17:01:35.396 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-08-29 17:01:35.396757 | orchestrator | 17:01:35.396 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-08-29 17:01:35.396792 | orchestrator | 17:01:35.396 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 17:01:35.396829 | orchestrator | 17:01:35.396 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 17:01:35.396852 | orchestrator | 17:01:35.396 STDOUT terraform:  + availability_zone_hints = [ 2025-08-29 17:01:35.396863 | orchestrator | 17:01:35.396 STDOUT terraform:  + "nova", 2025-08-29 17:01:35.396871 | orchestrator | 17:01:35.396 STDOUT terraform:  ] 2025-08-29 17:01:35.396907 | orchestrator | 17:01:35.396 STDOUT terraform:  + distributed = (known after apply) 2025-08-29 17:01:35.396942 | orchestrator | 17:01:35.396 STDOUT terraform:  + enable_snat = (known after apply) 2025-08-29 17:01:35.397012 | orchestrator | 17:01:35.396 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-08-29 17:01:35.397056 | orchestrator | 17:01:35.397 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-08-29 17:01:35.397079 | orchestrator | 17:01:35.397 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.397107 | orchestrator | 17:01:35.397 STDOUT terraform:  + name = "testbed" 2025-08-29 17:01:35.397218 | orchestrator | 17:01:35.397 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.397225 | orchestrator | 17:01:35.397 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.397232 | orchestrator | 17:01:35.397 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-08-29 17:01:35.397236 | orchestrator | 17:01:35.397 STDOUT terraform:  } 2025-08-29 17:01:35.397288 | orchestrator | 17:01:35.397 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-08-29 17:01:35.397339 | orchestrator | 17:01:35.397 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-08-29 17:01:35.397363 | orchestrator | 17:01:35.397 STDOUT terraform:  + description = "ssh" 2025-08-29 17:01:35.397390 | orchestrator | 17:01:35.397 STDOUT terraform:  + direction = "ingress" 2025-08-29 17:01:35.397415 | orchestrator | 17:01:35.397 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 17:01:35.397451 | orchestrator | 17:01:35.397 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.397474 | orchestrator | 17:01:35.397 STDOUT terraform:  + port_range_max = 22 2025-08-29 17:01:35.397496 | orchestrator | 17:01:35.397 STDOUT terraform:  + port_range_min = 22 2025-08-29 17:01:35.397521 | orchestrator | 17:01:35.397 STDOUT terraform:  + protocol = "tcp" 2025-08-29 17:01:35.397556 | orchestrator | 17:01:35.397 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.397590 | orchestrator | 17:01:35.397 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 17:01:35.397625 | orchestrator | 17:01:35.397 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 17:01:35.397653 | orchestrator | 17:01:35.397 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 17:01:35.397688 | orchestrator | 17:01:35.397 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 17:01:35.397725 | orchestrator | 17:01:35.397 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.397731 | orchestrator | 17:01:35.397 STDOUT terraform:  } 2025-08-29 17:01:35.397785 | orchestrator | 17:01:35.397 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-08-29 17:01:35.397836 | orchestrator | 17:01:35.397 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-08-29 17:01:35.397865 | orchestrator | 17:01:35.397 STDOUT terraform:  + description = "wireguard" 2025-08-29 17:01:35.397905 | orchestrator 
| 17:01:35.397 STDOUT terraform:  + direction = "ingress" 2025-08-29 17:01:35.397939 | orchestrator | 17:01:35.397 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 17:01:35.398004 | orchestrator | 17:01:35.397 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.398047 | orchestrator | 17:01:35.398 STDOUT terraform:  + port_range_max = 51820 2025-08-29 17:01:35.398071 | orchestrator | 17:01:35.398 STDOUT terraform:  + port_range_min = 51820 2025-08-29 17:01:35.398095 | orchestrator | 17:01:35.398 STDOUT terraform:  + protocol = "udp" 2025-08-29 17:01:35.398131 | orchestrator | 17:01:35.398 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.398165 | orchestrator | 17:01:35.398 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 17:01:35.398216 | orchestrator | 17:01:35.398 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 17:01:35.398246 | orchestrator | 17:01:35.398 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 17:01:35.398331 | orchestrator | 17:01:35.398 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 17:01:35.398338 | orchestrator | 17:01:35.398 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.398342 | orchestrator | 17:01:35.398 STDOUT terraform:  } 2025-08-29 17:01:35.398378 | orchestrator | 17:01:35.398 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-08-29 17:01:35.398429 | orchestrator | 17:01:35.398 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-08-29 17:01:35.398457 | orchestrator | 17:01:35.398 STDOUT terraform:  + direction = "ingress" 2025-08-29 17:01:35.398483 | orchestrator | 17:01:35.398 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 17:01:35.398518 | orchestrator | 17:01:35.398 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.398543 | orchestrator | 
17:01:35.398 STDOUT terraform:  + protocol = "tcp" 2025-08-29 17:01:35.398583 | orchestrator | 17:01:35.398 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.398609 | orchestrator | 17:01:35.398 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 17:01:35.398642 | orchestrator | 17:01:35.398 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 17:01:35.398678 | orchestrator | 17:01:35.398 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-08-29 17:01:35.398708 | orchestrator | 17:01:35.398 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 17:01:35.398744 | orchestrator | 17:01:35.398 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.398750 | orchestrator | 17:01:35.398 STDOUT terraform:  } 2025-08-29 17:01:35.398806 | orchestrator | 17:01:35.398 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-08-29 17:01:35.398857 | orchestrator | 17:01:35.398 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-08-29 17:01:35.398884 | orchestrator | 17:01:35.398 STDOUT terraform:  + direction = "ingress" 2025-08-29 17:01:35.398909 | orchestrator | 17:01:35.398 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 17:01:35.398943 | orchestrator | 17:01:35.398 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.399016 | orchestrator | 17:01:35.398 STDOUT terraform:  + protocol = "udp" 2025-08-29 17:01:35.399023 | orchestrator | 17:01:35.398 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.399040 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 17:01:35.399073 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 17:01:35.399107 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-08-29 17:01:35.399140 | orchestrator | 17:01:35.399 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 17:01:35.399202 | orchestrator | 17:01:35.399 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.399209 | orchestrator | 17:01:35.399 STDOUT terraform:  } 2025-08-29 17:01:35.399261 | orchestrator | 17:01:35.399 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-08-29 17:01:35.399313 | orchestrator | 17:01:35.399 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-08-29 17:01:35.399340 | orchestrator | 17:01:35.399 STDOUT terraform:  + direction = "ingress" 2025-08-29 17:01:35.399436 | orchestrator | 17:01:35.399 STDOUT terraform:  + ethertype = "IPv4" 2025-08-29 17:01:35.399443 | orchestrator | 17:01:35.399 STDOUT terraform:  + id = (known after apply) 2025-08-29 17:01:35.399447 | orchestrator | 17:01:35.399 STDOUT terraform:  + protocol = "icmp" 2025-08-29 17:01:35.399463 | orchestrator | 17:01:35.399 STDOUT terraform:  + region = (known after apply) 2025-08-29 17:01:35.399482 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-08-29 17:01:35.399517 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_group_id = (known after apply) 2025-08-29 17:01:35.399545 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-08-29 17:01:35.399579 | orchestrator | 17:01:35.399 STDOUT terraform:  + security_group_id = (known after apply) 2025-08-29 17:01:35.399615 | orchestrator | 17:01:35.399 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 17:01:35.399629 | orchestrator | 17:01:35.399 STDOUT terraform:  } 2025-08-29 17:01:35.399678 | orchestrator | 17:01:35.399 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-08-29 17:01:35.399728 | 
orchestrator | 17:01:35.399 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-08-29 17:01:35.399756 | orchestrator | 17:01:35.399 STDOUT terraform:  + direction = "ingress"
2025-08-29 17:01:35.399780 | orchestrator | 17:01:35.399 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 17:01:35.399816 | orchestrator | 17:01:35.399 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.399839 | orchestrator | 17:01:35.399 STDOUT terraform:  + protocol = "tcp"
2025-08-29 17:01:35.399875 | orchestrator | 17:01:35.399 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.399910 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 17:01:35.399945 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 17:01:35.399993 | orchestrator | 17:01:35.399 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 17:01:35.400027 | orchestrator | 17:01:35.399 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 17:01:35.400063 | orchestrator | 17:01:35.400 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.400078 | orchestrator | 17:01:35.400 STDOUT terraform:  }
2025-08-29 17:01:35.400129 | orchestrator | 17:01:35.400 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-08-29 17:01:35.400179 | orchestrator | 17:01:35.400 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-08-29 17:01:35.400209 | orchestrator | 17:01:35.400 STDOUT terraform:  + direction = "ingress"
2025-08-29 17:01:35.400233 | orchestrator | 17:01:35.400 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 17:01:35.400269 | orchestrator | 17:01:35.400 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.400293 | orchestrator | 17:01:35.400 STDOUT terraform:  + protocol = "udp"
2025-08-29 17:01:35.400329 | orchestrator | 17:01:35.400 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.400363 | orchestrator | 17:01:35.400 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 17:01:35.400399 | orchestrator | 17:01:35.400 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 17:01:35.400430 | orchestrator | 17:01:35.400 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 17:01:35.400465 | orchestrator | 17:01:35.400 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 17:01:35.400537 | orchestrator | 17:01:35.400 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.400543 | orchestrator | 17:01:35.400 STDOUT terraform:  }
2025-08-29 17:01:35.400557 | orchestrator | 17:01:35.400 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-08-29 17:01:35.400606 | orchestrator | 17:01:35.400 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-08-29 17:01:35.400633 | orchestrator | 17:01:35.400 STDOUT terraform:  + direction = "ingress"
2025-08-29 17:01:35.400657 | orchestrator | 17:01:35.400 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 17:01:35.400693 | orchestrator | 17:01:35.400 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.400718 | orchestrator | 17:01:35.400 STDOUT terraform:  + protocol = "icmp"
2025-08-29 17:01:35.400754 | orchestrator | 17:01:35.400 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.400788 | orchestrator | 17:01:35.400 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 17:01:35.400823 | orchestrator | 17:01:35.400 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 17:01:35.400853 | orchestrator | 17:01:35.400 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 17:01:35.400888 | orchestrator | 17:01:35.400 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 17:01:35.400924 | orchestrator | 17:01:35.400 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.400930 | orchestrator | 17:01:35.400 STDOUT terraform:  }
2025-08-29 17:01:35.400992 | orchestrator | 17:01:35.400 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-08-29 17:01:35.401040 | orchestrator | 17:01:35.400 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-08-29 17:01:35.401065 | orchestrator | 17:01:35.401 STDOUT terraform:  + description = "vrrp"
2025-08-29 17:01:35.401092 | orchestrator | 17:01:35.401 STDOUT terraform:  + direction = "ingress"
2025-08-29 17:01:35.401117 | orchestrator | 17:01:35.401 STDOUT terraform:  + ethertype = "IPv4"
2025-08-29 17:01:35.401177 | orchestrator | 17:01:35.401 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.401185 | orchestrator | 17:01:35.401 STDOUT terraform:  + protocol = "112"
2025-08-29 17:01:35.401209 | orchestrator | 17:01:35.401 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.401243 | orchestrator | 17:01:35.401 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-08-29 17:01:35.401278 | orchestrator | 17:01:35.401 STDOUT terraform:  + remote_group_id = (known after apply)
2025-08-29 17:01:35.401305 | orchestrator | 17:01:35.401 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-08-29 17:01:35.401340 | orchestrator | 17:01:35.401 STDOUT terraform:  + security_group_id = (known after apply)
2025-08-29 17:01:35.401375 | orchestrator | 17:01:35.401 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.401381 | orchestrator | 17:01:35.401 STDOUT terraform:  }
2025-08-29 17:01:35.401432 | orchestrator | 17:01:35.401 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-08-29 17:01:35.401479 | orchestrator | 17:01:35.401 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-08-29 17:01:35.401508 | orchestrator | 17:01:35.401 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 17:01:35.401541 | orchestrator | 17:01:35.401 STDOUT terraform:  + description = "management security group"
2025-08-29 17:01:35.401569 | orchestrator | 17:01:35.401 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.401639 | orchestrator | 17:01:35.401 STDOUT terraform:  + name = "testbed-management"
2025-08-29 17:01:35.401645 | orchestrator | 17:01:35.401 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.401651 | orchestrator | 17:01:35.401 STDOUT terraform:  + stateful = (known after apply)
2025-08-29 17:01:35.401669 | orchestrator | 17:01:35.401 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.401676 | orchestrator | 17:01:35.401 STDOUT terraform:  }
2025-08-29 17:01:35.401722 | orchestrator | 17:01:35.401 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-08-29 17:01:35.401767 | orchestrator | 17:01:35.401 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-08-29 17:01:35.401796 | orchestrator | 17:01:35.401 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 17:01:35.401820 | orchestrator | 17:01:35.401 STDOUT terraform:  + description = "node security group"
2025-08-29 17:01:35.401847 | orchestrator | 17:01:35.401 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.401870 | orchestrator | 17:01:35.401 STDOUT terraform:  + name = "testbed-node"
2025-08-29 17:01:35.401898 | orchestrator | 17:01:35.401 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.401926 | orchestrator | 17:01:35.401 STDOUT terraform:  + stateful = (known after apply)
2025-08-29 17:01:35.401952 | orchestrator | 17:01:35.401 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.401958 | orchestrator | 17:01:35.401 STDOUT terraform:  }
2025-08-29 17:01:35.402040 | orchestrator | 17:01:35.401 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-08-29 17:01:35.402073 | orchestrator | 17:01:35.402 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-08-29 17:01:35.402104 | orchestrator | 17:01:35.402 STDOUT terraform:  + all_tags = (known after apply)
2025-08-29 17:01:35.402132 | orchestrator | 17:01:35.402 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-08-29 17:01:35.402150 | orchestrator | 17:01:35.402 STDOUT terraform:  + dns_nameservers = [
2025-08-29 17:01:35.402166 | orchestrator | 17:01:35.402 STDOUT terraform:  + "8.8.8.8",
2025-08-29 17:01:35.402181 | orchestrator | 17:01:35.402 STDOUT terraform:  + "9.9.9.9",
2025-08-29 17:01:35.402195 | orchestrator | 17:01:35.402 STDOUT terraform:  ]
2025-08-29 17:01:35.402210 | orchestrator | 17:01:35.402 STDOUT terraform:  + enable_dhcp = true
2025-08-29 17:01:35.402239 | orchestrator | 17:01:35.402 STDOUT terraform:  + gateway_ip = (known after apply)
2025-08-29 17:01:35.402268 | orchestrator | 17:01:35.402 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.402288 | orchestrator | 17:01:35.402 STDOUT terraform:  + ip_version = 4
2025-08-29 17:01:35.402317 | orchestrator | 17:01:35.402 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-08-29 17:01:35.402348 | orchestrator | 17:01:35.402 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-08-29 17:01:35.402386 | orchestrator | 17:01:35.402 STDOUT terraform:  + name = "subnet-testbed-management"
2025-08-29 17:01:35.402415 | orchestrator | 17:01:35.402 STDOUT terraform:  + network_id = (known after apply)
2025-08-29 17:01:35.402434 | orchestrator | 17:01:35.402 STDOUT terraform:  + no_gateway = false
2025-08-29 17:01:35.402461 | orchestrator | 17:01:35.402 STDOUT terraform:  + region = (known after apply)
2025-08-29 17:01:35.402491 | orchestrator | 17:01:35.402 STDOUT terraform:  + service_types = (known after apply)
2025-08-29 17:01:35.402520 | orchestrator | 17:01:35.402 STDOUT terraform:  + tenant_id = (known after apply)
2025-08-29 17:01:35.402541 | orchestrator | 17:01:35.402 STDOUT terraform:  + allocation_pool {
2025-08-29 17:01:35.402565 | orchestrator | 17:01:35.402 STDOUT terraform:  + end = "192.168.31.250"
2025-08-29 17:01:35.402589 | orchestrator | 17:01:35.402 STDOUT terraform:  + start = "192.168.31.200"
2025-08-29 17:01:35.402599 | orchestrator | 17:01:35.402 STDOUT terraform:  }
2025-08-29 17:01:35.402607 | orchestrator | 17:01:35.402 STDOUT terraform:  }
2025-08-29 17:01:35.402630 | orchestrator | 17:01:35.402 STDOUT terraform:  # terraform_data.image will be created
2025-08-29 17:01:35.402652 | orchestrator | 17:01:35.402 STDOUT terraform:  + resource "terraform_data" "image" {
2025-08-29 17:01:35.402675 | orchestrator | 17:01:35.402 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.402766 | orchestrator | 17:01:35.402 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-08-29 17:01:35.402773 | orchestrator | 17:01:35.402 STDOUT terraform:  + output = (known after apply)
2025-08-29 17:01:35.402777 | orchestrator | 17:01:35.402 STDOUT terraform:  }
2025-08-29 17:01:35.402781 | orchestrator | 17:01:35.402 STDOUT terraform:  # terraform_data.image_node will be created
2025-08-29 17:01:35.402791 | orchestrator | 17:01:35.402 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-08-29 17:01:35.402797 | orchestrator | 17:01:35.402 STDOUT terraform:  + id = (known after apply)
2025-08-29 17:01:35.402801 | orchestrator | 17:01:35.402 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-08-29 17:01:35.402818 | orchestrator | 17:01:35.402 STDOUT terraform:  + output = (known after apply)
2025-08-29 17:01:35.402824 | orchestrator | 17:01:35.402 STDOUT terraform:  }
2025-08-29 17:01:35.402854 | orchestrator | 17:01:35.402 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-08-29 17:01:35.402868 | orchestrator | 17:01:35.402 STDOUT terraform: Changes to Outputs:
2025-08-29 17:01:35.402892 | orchestrator | 17:01:35.402 STDOUT terraform:  + manager_address = (sensitive value)
2025-08-29 17:01:35.402916 | orchestrator | 17:01:35.402 STDOUT terraform:  + private_key = (sensitive value)
2025-08-29 17:01:35.579382 | orchestrator | 17:01:35.579 STDOUT terraform: terraform_data.image_node: Creating...
2025-08-29 17:01:35.580287 | orchestrator | 17:01:35.580 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=ed06ef58-b26c-724e-8e08-febfe92a4c8c]
2025-08-29 17:01:35.580864 | orchestrator | 17:01:35.580 STDOUT terraform: terraform_data.image: Creating...
2025-08-29 17:01:35.581890 | orchestrator | 17:01:35.581 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=8f212534-31ff-7afc-46d7-f0ba17536571]
2025-08-29 17:01:35.609335 | orchestrator | 17:01:35.609 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-08-29 17:01:35.609402 | orchestrator | 17:01:35.609 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-08-29 17:01:35.617136 | orchestrator | 17:01:35.617 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-08-29 17:01:35.620007 | orchestrator | 17:01:35.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-08-29 17:01:35.620043 | orchestrator | 17:01:35.619 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-08-29 17:01:35.623884 | orchestrator | 17:01:35.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-08-29 17:01:35.623916 | orchestrator | 17:01:35.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-08-29 17:01:35.625611 | orchestrator | 17:01:35.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-08-29 17:01:35.627029 | orchestrator | 17:01:35.626 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-08-29 17:01:35.629327 | orchestrator | 17:01:35.629 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-08-29 17:01:36.091259 | orchestrator | 17:01:36.090 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 17:01:36.097807 | orchestrator | 17:01:36.097 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-08-29 17:01:36.107021 | orchestrator | 17:01:36.106 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 17:01:36.116332 | orchestrator | 17:01:36.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-08-29 17:01:36.134523 | orchestrator | 17:01:36.133 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-08-29 17:01:36.138856 | orchestrator | 17:01:36.138 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-08-29 17:01:36.710467 | orchestrator | 17:01:36.710 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=450970d5-0add-445b-a39b-95f9ecd0f2e5]
2025-08-29 17:01:36.725786 | orchestrator | 17:01:36.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-08-29 17:01:39.273567 | orchestrator | 17:01:39.273 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=a18b030a-ae85-4637-b6b5-bac67700b18c]
2025-08-29 17:01:39.287934 | orchestrator | 17:01:39.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=09270e93-6558-41e1-b148-ad056c65a217]
2025-08-29 17:01:39.291554 | orchestrator | 17:01:39.291 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-08-29 17:01:39.299766 | orchestrator | 17:01:39.299 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=20300dc2-4158-438d-b195-18b8d76d00ae]
2025-08-29 17:01:39.299804 | orchestrator | 17:01:39.299 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-08-29 17:01:39.304295 | orchestrator | 17:01:39.304 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-08-29 17:01:39.317050 | orchestrator | 17:01:39.316 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=57070356-ca6b-46ac-b3ca-d106a6094fff]
2025-08-29 17:01:39.320733 | orchestrator | 17:01:39.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=5cc89214-04a9-4a5a-ac59-f5bd895bbd87]
2025-08-29 17:01:39.322219 | orchestrator | 17:01:39.322 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-08-29 17:01:39.324450 | orchestrator | 17:01:39.324 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=8cf5a937-7553-474f-9654-82589e52b79f]
2025-08-29 17:01:39.326364 | orchestrator | 17:01:39.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-08-29 17:01:39.332573 | orchestrator | 17:01:39.332 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-08-29 17:01:39.375217 | orchestrator | 17:01:39.374 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=eb850900-8a70-4f68-bf30-0b7ae8c748a0]
2025-08-29 17:01:39.387338 | orchestrator | 17:01:39.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=e457a33d-5293-40a2-9d8c-11847a0f2527]
2025-08-29 17:01:39.390403 | orchestrator | 17:01:39.390 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-08-29 17:01:39.395244 | orchestrator | 17:01:39.395 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-08-29 17:01:39.398486 | orchestrator | 17:01:39.398 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=06b0751e365f91a896aff58d7b10c52384bdd176]
2025-08-29 17:01:39.400076 | orchestrator | 17:01:39.399 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=a25e052874c068b353b6f41f3c187b3cfb1e374a]
2025-08-29 17:01:39.403935 | orchestrator | 17:01:39.403 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=370f8e9e-996a-4d39-adb3-26d918a9c02e]
2025-08-29 17:01:39.404098 | orchestrator | 17:01:39.404 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-08-29 17:01:40.098677 | orchestrator | 17:01:40.098 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=ec837cdc-1e29-4e10-9703-468e978b2daa]
2025-08-29 17:01:40.387050 | orchestrator | 17:01:40.386 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=2714edc9-b661-41ce-8a20-49048f5f0ea4]
2025-08-29 17:01:40.396727 | orchestrator | 17:01:40.396 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-08-29 17:01:42.656667 | orchestrator | 17:01:42.656 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=1d77be04-615d-47f2-877f-564a4cbf903e]
2025-08-29 17:01:42.723474 | orchestrator | 17:01:42.723 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=f31d3c31-ee4b-483f-a3c2-6492dae07e0e]
2025-08-29 17:01:42.724201 | orchestrator | 17:01:42.723 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ee02ac66-7081-4f67-9e89-908cf88442b2]
2025-08-29 17:01:42.747631 | orchestrator | 17:01:42.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=82ede1f5-1152-49fb-8657-6e3d9aa6c6b6]
2025-08-29 17:01:42.762234 | orchestrator | 17:01:42.761 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=62d76b6d-8e4f-4307-bfb3-201fe97ea00b]
2025-08-29 17:01:42.769124 | orchestrator | 17:01:42.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=53d0bb56-43d8-4988-b520-f0487c65e4d2]
2025-08-29 17:01:42.869919 | orchestrator | 17:01:42.869 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=e1ade903-833e-42c2-ad57-a9363bf92237]
2025-08-29 17:01:42.882624 | orchestrator | 17:01:42.882 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-08-29 17:01:42.885708 | orchestrator | 17:01:42.885 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-08-29 17:01:42.887607 | orchestrator | 17:01:42.887 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-08-29 17:01:43.060514 | orchestrator | 17:01:43.060 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a05d7076-59fa-40e6-834c-bee6eb605aed]
2025-08-29 17:01:43.074872 | orchestrator | 17:01:43.074 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-08-29 17:01:43.074942 | orchestrator | 17:01:43.074 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-08-29 17:01:43.077367 | orchestrator | 17:01:43.077 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-08-29 17:01:43.080811 | orchestrator | 17:01:43.080 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-08-29 17:01:43.083706 | orchestrator | 17:01:43.083 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-08-29 17:01:43.085670 | orchestrator | 17:01:43.085 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-08-29 17:01:43.095056 | orchestrator | 17:01:43.094 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-08-29 17:01:43.095112 | orchestrator | 17:01:43.094 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-08-29 17:01:43.145399 | orchestrator | 17:01:43.145 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b4ddbd69-75a6-4fbd-ab7c-b6a7894c35d6]
2025-08-29 17:01:43.157046 | orchestrator | 17:01:43.156 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-08-29 17:01:43.501051 | orchestrator | 17:01:43.500 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=51a05d2b-a5f4-47cd-bcac-3f23e01e8742]
2025-08-29 17:01:43.516062 | orchestrator | 17:01:43.515 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-08-29 17:01:43.719318 | orchestrator | 17:01:43.718 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=01593d07-a60b-4224-80c4-490ef04c6026]
2025-08-29 17:01:43.726671 | orchestrator | 17:01:43.726 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-08-29 17:01:43.753040 | orchestrator | 17:01:43.752 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=80dbf7cc-5a40-47a3-aba2-6cda4df7274d]
2025-08-29 17:01:43.759695 | orchestrator | 17:01:43.759 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-08-29 17:01:43.826873 | orchestrator | 17:01:43.826 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b60b328a-d1f4-4ff1-811f-1c22ebe6843e]
2025-08-29 17:01:43.837491 | orchestrator | 17:01:43.837 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-08-29 17:01:43.840141 | orchestrator | 17:01:43.839 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=98c6314c-de44-4bdd-acf2-bb6c55eb381d]
2025-08-29 17:01:43.844227 | orchestrator | 17:01:43.844 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-08-29 17:01:43.861431 | orchestrator | 17:01:43.861 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7d604af7-0937-4cac-8913-adab33ac86df]
2025-08-29 17:01:43.865638 | orchestrator | 17:01:43.865 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-08-29 17:01:43.902627 | orchestrator | 17:01:43.902 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=6eff4ea2-49bb-44ea-b59d-7187fe76bc95]
2025-08-29 17:01:43.919178 | orchestrator | 17:01:43.919 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-08-29 17:01:43.929128 | orchestrator | 17:01:43.928 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=ab60a0fe-1769-4080-ad39-4b0a8d8fe0f6]
2025-08-29 17:01:43.931122 | orchestrator | 17:01:43.930 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b341868c-b666-4704-974e-a6429d666199]
2025-08-29 17:01:44.085886 | orchestrator | 17:01:44.085 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=ea805e42-8c7d-455d-beb0-de7a3b0ae931]
2025-08-29 17:01:44.120435 | orchestrator | 17:01:44.120 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6fee8538-748d-4167-a286-d16e5086f161]
2025-08-29 17:01:44.413202 | orchestrator | 17:01:44.412 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e941788f-3fc5-4c8d-9efc-e9aeb174d48a]
2025-08-29 17:01:44.496319 | orchestrator | 17:01:44.495 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=a2d0f9fa-ae04-4640-a677-1ad29b62df34]
2025-08-29 17:01:44.564400 | orchestrator | 17:01:44.564 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=53d9c0da-51a9-4115-846e-bc131bdcf54c]
2025-08-29 17:01:44.730809 | orchestrator | 17:01:44.730 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=68c7e0f5-1796-42ac-999c-28d8d18dec60]
2025-08-29 17:01:44.823903 | orchestrator | 17:01:44.823 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=6b6cc078-9130-4e68-ad9f-af68554d2bdd]
2025-08-29 17:01:45.142605 | orchestrator | 17:01:45.142 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=0ac2fa58-5dbe-4359-b6b4-ecdee0176de2]
2025-08-29 17:01:45.153485 | orchestrator | 17:01:45.153 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-08-29 17:01:45.177009 | orchestrator | 17:01:45.176 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-08-29 17:01:45.179362 | orchestrator | 17:01:45.179 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-08-29 17:01:45.180959 | orchestrator | 17:01:45.180 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-08-29 17:01:45.190166 | orchestrator | 17:01:45.190 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-08-29 17:01:45.198511 | orchestrator | 17:01:45.198 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-08-29 17:01:45.199160 | orchestrator | 17:01:45.199 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-08-29 17:01:46.604616 | orchestrator | 17:01:46.604 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=4cc874ea-2c08-4bcf-bc4f-c98af5fa2d4d] 2025-08-29 17:01:46.614910 | orchestrator | 17:01:46.614 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-08-29 17:01:46.619958 | orchestrator | 17:01:46.619 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-08-29 17:01:46.628427 | orchestrator | 17:01:46.628 STDOUT terraform: local_file.inventory: Creating... 2025-08-29 17:01:46.629367 | orchestrator | 17:01:46.629 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fcefe706c959a13117c049481ea83cda6e12f99c] 2025-08-29 17:01:46.633824 | orchestrator | 17:01:46.633 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=7803af64aa968d83d92f91de29fbe37827dab8ae] 2025-08-29 17:01:48.489092 | orchestrator | 17:01:48.488 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4cc874ea-2c08-4bcf-bc4f-c98af5fa2d4d] 2025-08-29 17:01:55.181287 | orchestrator | 17:01:55.180 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-08-29 17:01:55.181445 | orchestrator | 17:01:55.181 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-08-29 17:01:55.181680 | orchestrator | 17:01:55.181 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-08-29 17:01:55.191448 | orchestrator | 17:01:55.191 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-08-29 17:01:55.201087 | orchestrator | 17:01:55.200 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2025-08-29 17:01:55.201177 | orchestrator | 17:01:55.201 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-08-29 17:02:05.182213 | orchestrator | 17:02:05.181 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-08-29 17:02:05.182333 | orchestrator | 17:02:05.182 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-08-29 17:02:05.182487 | orchestrator | 17:02:05.182 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-08-29 17:02:05.192294 | orchestrator | 17:02:05.192 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-08-29 17:02:05.202210 | orchestrator | 17:02:05.201 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-08-29 17:02:05.202283 | orchestrator | 17:02:05.201 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-08-29 17:02:05.595763 | orchestrator | 17:02:05.595 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=68f2c064-d3eb-4034-bfed-f24e335e5a2d] 2025-08-29 17:02:05.665383 | orchestrator | 17:02:05.665 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=b8a5210f-db45-4315-a0aa-53286cea9d05] 2025-08-29 17:02:06.165294 | orchestrator | 17:02:06.164 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=630def75-7b25-4713-b29f-5bc3eccb86c0] 2025-08-29 17:02:15.183887 | orchestrator | 17:02:15.183 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-08-29 17:02:15.192956 | orchestrator | 17:02:15.192 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed]
2025-08-29 17:02:15.202352 | orchestrator | 17:02:15.202 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-08-29 17:02:15.898329 | orchestrator | 17:02:15.897 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=f3f16e59-d92d-4e45-bae8-b6f6b0b5bca2]
2025-08-29 17:02:15.900470 | orchestrator | 17:02:15.900 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=8cd3aae5-545b-4f70-9062-9304059e5ee5]
2025-08-29 17:02:16.004784 | orchestrator | 17:02:16.004 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=d52f50ad-69fd-46c8-ae8d-b3a3fecc28cc]
2025-08-29 17:02:16.026044 | orchestrator | 17:02:16.025 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-08-29 17:02:16.029092 | orchestrator | 17:02:16.028 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-08-29 17:02:16.029127 | orchestrator | 17:02:16.029 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5643894878335803769]
2025-08-29 17:02:16.030704 | orchestrator | 17:02:16.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-08-29 17:02:16.031501 | orchestrator | 17:02:16.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-08-29 17:02:16.031620 | orchestrator | 17:02:16.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-08-29 17:02:16.033523 | orchestrator | 17:02:16.033 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-08-29 17:02:16.046134 | orchestrator | 17:02:16.046 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-08-29 17:02:16.048133 | orchestrator | 17:02:16.048 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-08-29 17:02:16.048962 | orchestrator | 17:02:16.048 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-08-29 17:02:16.049295 | orchestrator | 17:02:16.049 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-08-29 17:02:16.055458 | orchestrator | 17:02:16.055 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-08-29 17:02:19.409497 | orchestrator | 17:02:19.409 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=b8a5210f-db45-4315-a0aa-53286cea9d05/09270e93-6558-41e1-b148-ad056c65a217]
2025-08-29 17:02:19.421684 | orchestrator | 17:02:19.421 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=68f2c064-d3eb-4034-bfed-f24e335e5a2d/370f8e9e-996a-4d39-adb3-26d918a9c02e]
2025-08-29 17:02:19.439071 | orchestrator | 17:02:19.438 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=f3f16e59-d92d-4e45-bae8-b6f6b0b5bca2/eb850900-8a70-4f68-bf30-0b7ae8c748a0]
2025-08-29 17:02:19.456667 | orchestrator | 17:02:19.456 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=68f2c064-d3eb-4034-bfed-f24e335e5a2d/5cc89214-04a9-4a5a-ac59-f5bd895bbd87]
2025-08-29 17:02:19.499272 | orchestrator | 17:02:19.498 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=b8a5210f-db45-4315-a0aa-53286cea9d05/57070356-ca6b-46ac-b3ca-d106a6094fff]
2025-08-29 17:02:19.508600 | orchestrator | 17:02:19.508 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=f3f16e59-d92d-4e45-bae8-b6f6b0b5bca2/e457a33d-5293-40a2-9d8c-11847a0f2527]
2025-08-29 17:02:25.581429 | orchestrator | 17:02:25.581 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=b8a5210f-db45-4315-a0aa-53286cea9d05/20300dc2-4158-438d-b195-18b8d76d00ae]
2025-08-29 17:02:25.585236 | orchestrator | 17:02:25.585 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=68f2c064-d3eb-4034-bfed-f24e335e5a2d/8cf5a937-7553-474f-9654-82589e52b79f]
2025-08-29 17:02:25.621498 | orchestrator | 17:02:25.621 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=f3f16e59-d92d-4e45-bae8-b6f6b0b5bca2/a18b030a-ae85-4637-b6b5-bac67700b18c]
2025-08-29 17:02:26.059070 | orchestrator | 17:02:26.058 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-08-29 17:02:36.059866 | orchestrator | 17:02:36.059 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-08-29 17:02:36.377255 | orchestrator | 17:02:36.376 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=56858d2f-cf01-43c5-921e-b251b8e5d422]
2025-08-29 17:02:36.392844 | orchestrator | 17:02:36.392 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-08-29 17:02:36.392915 | orchestrator | 17:02:36.392 STDOUT terraform: Outputs:
2025-08-29 17:02:36.392932 | orchestrator | 17:02:36.392 STDOUT terraform: manager_address =
2025-08-29 17:02:36.392943 | orchestrator | 17:02:36.392 STDOUT terraform: private_key =
2025-08-29 17:02:36.854480 | orchestrator | ok: Runtime: 0:01:10.178584
2025-08-29 17:02:36.895211 |
2025-08-29 17:02:36.895454 | TASK [Fetch manager address]
2025-08-29 17:02:37.329508 | orchestrator | ok
2025-08-29 17:02:37.339984 |
2025-08-29 17:02:37.340122 | TASK [Set manager_host address]
2025-08-29 17:02:37.421538 | orchestrator | ok
2025-08-29 17:02:37.431514 |
2025-08-29 17:02:37.431679 | LOOP [Update ansible collections]
2025-08-29 17:02:38.150757 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-08-29 17:02:38.151290 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-08-29 17:02:38.151367 | orchestrator | Starting galaxy collection install process
2025-08-29 17:02:38.151414 | orchestrator | Process install dependency map
2025-08-29 17:02:38.151455 | orchestrator | Starting collection install process
2025-08-29 17:02:38.151491 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2025-08-29 17:02:38.151533 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2025-08-29 17:02:38.151577 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-08-29 17:02:38.151667 | orchestrator | ok: Item: commons Runtime: 0:00:00.427247
2025-08-29 17:02:38.891274 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-08-29 17:02:38.891695 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-08-29 17:02:38.891785 | orchestrator | Starting galaxy collection install process
2025-08-29 17:02:38.891839 | orchestrator | Process install dependency map
2025-08-29 17:02:38.891886 | orchestrator | Starting collection install process
2025-08-29 17:02:38.891929 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2025-08-29 17:02:38.891973 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2025-08-29 17:02:38.892013 | orchestrator | osism.services:999.0.0 was installed successfully
2025-08-29 17:02:38.892098 | orchestrator | ok: Item: services Runtime: 0:00:00.511164
2025-08-29 17:02:38.920968 |
2025-08-29 17:02:38.921178 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-08-29 17:02:49.464367 | orchestrator | ok
2025-08-29 17:02:49.474709 |
2025-08-29 17:02:49.474863 | TASK [Wait a little longer for the manager so that everything is ready]
2025-08-29 17:03:49.526279 | orchestrator | ok
2025-08-29 17:03:49.537600 |
2025-08-29 17:03:49.537717 | TASK [Fetch manager ssh hostkey]
2025-08-29 17:03:51.114349 | orchestrator | Output suppressed because no_log was given
2025-08-29 17:03:51.129565 |
2025-08-29 17:03:51.129732 | TASK [Get ssh keypair from terraform environment]
2025-08-29 17:03:51.673441 | orchestrator | ok: Runtime: 0:00:00.010417
2025-08-29 17:03:51.690182 |
2025-08-29 17:03:51.690347 | TASK [Point out that the following task takes some time and does not give any output]
2025-08-29 17:03:51.740265 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
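The "Wait up to 300 seconds for port 22 ... contain \"OpenSSH\"" task above is the standard Ansible `wait_for` pattern (a `search_regex` on the banner the SSH daemon sends on connect). The same check can be sketched in plain bash; `wait_for_ssh_banner` is a hypothetical helper for illustration, not part of the testbed scripts, and it assumes bash's `/dev/tcp` redirection is available:

```shell
#!/usr/bin/env bash
# Minimal sketch (assumption, not the testbed's implementation) of waiting
# until a TCP port answers with an SSH banner containing "OpenSSH",
# e.g. "SSH-2.0-OpenSSH_9.6p1".
wait_for_ssh_banner() {
  local host=$1 port=$2 timeout=$3
  local deadline=$((SECONDS + timeout)) banner
  while (( SECONDS < deadline )); do
    # Open a TCP connection via bash's /dev/tcp and read the first line the
    # server sends. If the connection fails, banner stays empty.
    banner=$( (exec 3<>"/dev/tcp/${host}/${port}" && head -n1 <&3) 2>/dev/null ) || banner=""
    if [[ ${banner} == *OpenSSH* ]]; then
      echo "ready: ${banner}"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for OpenSSH on ${host}:${port}" >&2
  return 1
}
```

Unlike Ansible's `wait_for`, this sketch has no per-connection read timeout, so a port that accepts but never sends a banner would stall one iteration; the module handles that case for you, which is why the job uses it.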
2025-08-29 17:03:51.750666 | 2025-08-29 17:03:51.750798 | TASK [Run manager part 0] 2025-08-29 17:03:52.558956 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 17:03:52.600774 | orchestrator | 2025-08-29 17:03:52.600814 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 17:03:52.600822 | orchestrator | 2025-08-29 17:03:52.600834 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 17:03:54.541672 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:54.541725 | orchestrator | 2025-08-29 17:03:54.541747 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 17:03:54.541757 | orchestrator | 2025-08-29 17:03:54.541766 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:03:56.483081 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:56.483125 | orchestrator | 2025-08-29 17:03:56.483133 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 17:03:57.160835 | orchestrator | ok: [testbed-manager] 2025-08-29 17:03:57.160888 | orchestrator | 2025-08-29 17:03:57.160896 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 17:03:57.212311 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.212329 | orchestrator | 2025-08-29 17:03:57.212336 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 17:03:57.240750 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.240765 | orchestrator | 2025-08-29 17:03:57.240770 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 17:03:57.266129 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.266158 | 
orchestrator | 2025-08-29 17:03:57.266162 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 17:03:57.289506 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.289518 | orchestrator | 2025-08-29 17:03:57.289522 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 17:03:57.328991 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.329038 | orchestrator | 2025-08-29 17:03:57.329046 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 17:03:57.366578 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.366612 | orchestrator | 2025-08-29 17:03:57.366619 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 17:03:57.393864 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:03:57.393880 | orchestrator | 2025-08-29 17:03:57.393886 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 17:03:58.163601 | orchestrator | changed: [testbed-manager] 2025-08-29 17:03:58.163651 | orchestrator | 2025-08-29 17:03:58.163658 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-08-29 17:06:45.195683 | orchestrator | changed: [testbed-manager] 2025-08-29 17:06:45.195726 | orchestrator | 2025-08-29 17:06:45.195735 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 17:08:33.161716 | orchestrator | changed: [testbed-manager] 2025-08-29 17:08:33.161822 | orchestrator | 2025-08-29 17:08:33.161838 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 17:09:00.080949 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:00.081080 | orchestrator | 2025-08-29 17:09:00.081101 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-08-29 17:09:09.383223 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:09.383304 | orchestrator | 2025-08-29 17:09:09.383320 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 17:09:09.427908 | orchestrator | ok: [testbed-manager] 2025-08-29 17:09:09.427983 | orchestrator | 2025-08-29 17:09:09.427997 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 17:09:10.193293 | orchestrator | ok: [testbed-manager] 2025-08-29 17:09:10.193439 | orchestrator | 2025-08-29 17:09:10.193457 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 17:09:10.921927 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:10.922057 | orchestrator | 2025-08-29 17:09:10.922076 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 17:09:17.771617 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:17.771705 | orchestrator | 2025-08-29 17:09:17.771742 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 17:09:24.323790 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:24.323876 | orchestrator | 2025-08-29 17:09:24.323892 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 17:09:27.099662 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:27.099748 | orchestrator | 2025-08-29 17:09:27.099765 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 17:09:29.175949 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:29.176054 | orchestrator | 2025-08-29 17:09:29.176070 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 
17:09:30.369082 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 17:09:30.369173 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 17:09:30.369188 | orchestrator | 2025-08-29 17:09:30.369201 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 17:09:30.411724 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 17:09:30.411768 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 17:09:30.411774 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 17:09:30.411779 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-08-29 17:09:33.702719 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 17:09:33.702796 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 17:09:33.702810 | orchestrator | 2025-08-29 17:09:33.702820 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 17:09:34.311819 | orchestrator | changed: [testbed-manager] 2025-08-29 17:09:34.311862 | orchestrator | 2025-08-29 17:09:34.311871 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 17:12:56.737753 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 17:12:56.737858 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 17:12:56.737876 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 17:12:56.737889 | orchestrator | 2025-08-29 17:12:56.737901 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 17:12:59.377072 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-08-29 17:12:59.377172 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 17:12:59.377188 | orchestrator | 2025-08-29 17:12:59.377200 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 17:12:59.377213 | orchestrator | 2025-08-29 17:12:59.377224 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:13:00.844152 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:00.844238 | orchestrator | 2025-08-29 17:13:00.844257 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 17:13:00.891614 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:00.891686 | orchestrator | 2025-08-29 17:13:00.891700 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 17:13:00.956757 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:00.956830 | orchestrator | 2025-08-29 17:13:00.956843 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 17:13:01.752094 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:01.752215 | orchestrator | 2025-08-29 17:13:01.752232 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 17:13:02.530252 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:02.530293 | orchestrator | 2025-08-29 17:13:02.530301 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 17:13:03.964275 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 17:13:03.964366 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 17:13:03.964381 | orchestrator | 2025-08-29 17:13:03.964408 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-08-29 17:13:05.350567 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:05.350623 | orchestrator | 2025-08-29 17:13:05.350632 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 17:13:07.193636 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:13:07.193820 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 17:13:07.193835 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:13:07.193848 | orchestrator | 2025-08-29 17:13:07.193860 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 17:13:07.247898 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:07.247951 | orchestrator | 2025-08-29 17:13:07.247958 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 17:13:07.902929 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:07.903048 | orchestrator | 2025-08-29 17:13:07.903067 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 17:13:07.972005 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:07.972088 | orchestrator | 2025-08-29 17:13:07.972103 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 17:13:08.854614 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:13:08.854701 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:08.854717 | orchestrator | 2025-08-29 17:13:08.854730 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 17:13:08.888291 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:08.888368 | orchestrator | 2025-08-29 17:13:08.888382 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 17:13:08.917863 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:08.917907 | orchestrator | 2025-08-29 17:13:08.917919 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 17:13:08.948845 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:08.948890 | orchestrator | 2025-08-29 17:13:08.948902 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 17:13:08.992088 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:08.992141 | orchestrator | 2025-08-29 17:13:08.992148 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 17:13:09.749003 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:09.749133 | orchestrator | 2025-08-29 17:13:09.749150 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 17:13:09.749163 | orchestrator | 2025-08-29 17:13:09.749175 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:13:11.200685 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:11.200782 | orchestrator | 2025-08-29 17:13:11.200800 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 17:13:12.177989 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:12.178061 | orchestrator | 2025-08-29 17:13:12.178070 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:13:12.178078 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 17:13:12.178085 | orchestrator | 2025-08-29 17:13:12.613048 | orchestrator | ok: Runtime: 0:09:20.230124 2025-08-29 17:13:12.637155 | 2025-08-29 17:13:12.637356 | TASK [Point 
out that the log in on the manager is now possible] 2025-08-29 17:13:12.685286 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-08-29 17:13:12.694537 | 2025-08-29 17:13:12.694652 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 17:13:12.736041 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 17:13:12.747243 | 2025-08-29 17:13:12.747380 | TASK [Run manager part 1 + 2] 2025-08-29 17:13:13.572947 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 17:13:13.625653 | orchestrator | 2025-08-29 17:13:13.625700 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 17:13:13.625707 | orchestrator | 2025-08-29 17:13:13.625720 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:13:16.561405 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:16.561449 | orchestrator | 2025-08-29 17:13:16.561472 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 17:13:16.602694 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:16.602736 | orchestrator | 2025-08-29 17:13:16.602746 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 17:13:16.641393 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:16.641435 | orchestrator | 2025-08-29 17:13:16.641448 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 17:13:16.678883 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:16.678922 | orchestrator | 2025-08-29 17:13:16.678931 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-08-29 17:13:16.737905 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:16.737950 | orchestrator | 2025-08-29 17:13:16.737960 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 17:13:16.804279 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:16.804320 | orchestrator | 2025-08-29 17:13:16.804330 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 17:13:16.846751 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 17:13:16.846782 | orchestrator | 2025-08-29 17:13:16.846787 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 17:13:17.704810 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:17.704853 | orchestrator | 2025-08-29 17:13:17.704864 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 17:13:17.750769 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:17.750807 | orchestrator | 2025-08-29 17:13:17.750815 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 17:13:19.095497 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:19.095533 | orchestrator | 2025-08-29 17:13:19.095540 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 17:13:19.691084 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:19.691169 | orchestrator | 2025-08-29 17:13:19.691179 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 17:13:20.871894 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:20.871951 | orchestrator | 2025-08-29 17:13:20.871968 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-08-29 17:13:38.259759 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:38.259799 | orchestrator | 2025-08-29 17:13:38.259805 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 17:13:38.974653 | orchestrator | ok: [testbed-manager] 2025-08-29 17:13:38.974741 | orchestrator | 2025-08-29 17:13:38.974759 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 17:13:39.032039 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:39.032103 | orchestrator | 2025-08-29 17:13:39.032111 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 17:13:40.056434 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:40.057322 | orchestrator | 2025-08-29 17:13:40.057348 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 17:13:41.076861 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:41.076904 | orchestrator | 2025-08-29 17:13:41.076913 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 17:13:41.676764 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:41.676823 | orchestrator | 2025-08-29 17:13:41.676837 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 17:13:41.716624 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 17:13:41.716696 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 17:13:41.716709 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 17:13:41.716721 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 17:13:43.600482 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:43.600557 | orchestrator | 2025-08-29 17:13:43.600572 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 17:13:53.680186 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 17:13:53.680285 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 17:13:53.680303 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 17:13:53.680315 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 17:13:53.680334 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 17:13:53.680345 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 17:13:53.680356 | orchestrator | 2025-08-29 17:13:53.680368 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 17:13:54.806734 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:54.806815 | orchestrator | 2025-08-29 17:13:54.806831 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 17:13:54.849742 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:54.849813 | orchestrator | 2025-08-29 17:13:54.849826 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 17:13:57.958878 | orchestrator | changed: [testbed-manager] 2025-08-29 17:13:57.958969 | orchestrator | 2025-08-29 17:13:57.958985 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 17:13:58.000465 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:13:58.000533 | orchestrator | 2025-08-29 17:13:58.000547 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 17:15:47.829516 | orchestrator | changed: [testbed-manager] 2025-08-29 
17:15:47.829627 | orchestrator | 2025-08-29 17:15:47.829645 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 17:15:49.248180 | orchestrator | ok: [testbed-manager] 2025-08-29 17:15:49.248393 | orchestrator | 2025-08-29 17:15:49.248413 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:15:49.248427 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 17:15:49.248439 | orchestrator | 2025-08-29 17:15:49.397249 | orchestrator | ok: Runtime: 0:02:36.279463 2025-08-29 17:15:49.409826 | 2025-08-29 17:15:49.409954 | TASK [Reboot manager] 2025-08-29 17:15:50.944019 | orchestrator | ok: Runtime: 0:00:01.020791 2025-08-29 17:15:50.957323 | 2025-08-29 17:15:50.957491 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 17:16:07.450452 | orchestrator | ok 2025-08-29 17:16:07.461938 | 2025-08-29 17:16:07.462070 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 17:17:07.512204 | orchestrator | ok 2025-08-29 17:17:07.521924 | 2025-08-29 17:17:07.522061 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 17:17:10.195056 | orchestrator | 2025-08-29 17:17:10.195271 | orchestrator | # DEPLOY MANAGER 2025-08-29 17:17:10.195344 | orchestrator | 2025-08-29 17:17:10.195360 | orchestrator | + set -e 2025-08-29 17:17:10.195374 | orchestrator | + echo 2025-08-29 17:17:10.195388 | orchestrator | + echo '# DEPLOY MANAGER' 2025-08-29 17:17:10.195405 | orchestrator | + echo 2025-08-29 17:17:10.195456 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 17:17:10.198804 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 17:17:10.198829 | orchestrator | 2025-08-29 17:17:10.198843 | orchestrator | export CEPH_VERSION=reef 2025-08-29 17:17:10.198856 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 17:17:10.198868 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-08-29 17:17:10.198890 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 17:17:10.198901 | orchestrator | 2025-08-29 17:17:10.198919 | orchestrator | export ARA=false 2025-08-29 17:17:10.198930 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 17:17:10.198948 | orchestrator | export TEMPEST=false 2025-08-29 17:17:10.198959 | orchestrator | export IS_ZUUL=true 2025-08-29 17:17:10.198970 | orchestrator | 2025-08-29 17:17:10.198988 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2025-08-29 17:17:10.199000 | orchestrator | export EXTERNAL_API=false 2025-08-29 17:17:10.199010 | orchestrator | 2025-08-29 17:17:10.199021 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 17:17:10.199034 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 17:17:10.199045 | orchestrator | 2025-08-29 17:17:10.199056 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 17:17:10.199363 | orchestrator | 2025-08-29 17:17:10.199380 | orchestrator | + echo 2025-08-29 17:17:10.199392 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 17:17:10.200581 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 17:17:10.200598 | orchestrator | ++ INTERACTIVE=false 2025-08-29 17:17:10.200741 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 17:17:10.200757 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 17:17:10.200857 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 17:17:10.200872 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 17:17:10.200883 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 17:17:10.200898 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 17:17:10.200910 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 17:17:10.200921 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 17:17:10.200932 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 17:17:10.200943 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 17:17:10.200954 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 17:17:10.200968 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 17:17:10.200987 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 17:17:10.200998 | orchestrator | ++ export ARA=false
2025-08-29 17:17:10.201009 | orchestrator | ++ ARA=false
2025-08-29 17:17:10.201020 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 17:17:10.201031 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 17:17:10.201045 | orchestrator | ++ export TEMPEST=false
2025-08-29 17:17:10.201056 | orchestrator | ++ TEMPEST=false
2025-08-29 17:17:10.201067 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 17:17:10.201078 | orchestrator | ++ IS_ZUUL=true
2025-08-29 17:17:10.201089 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2025-08-29 17:17:10.201100 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2025-08-29 17:17:10.201207 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 17:17:10.201223 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 17:17:10.201234 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 17:17:10.201245 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 17:17:10.201326 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 17:17:10.201341 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 17:17:10.201547 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 17:17:10.201607 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 17:17:10.201664 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-08-29 17:17:10.263408 | orchestrator | + docker version
2025-08-29 17:17:10.563632 | orchestrator | Client: Docker Engine - Community
2025-08-29 17:17:10.563708 | orchestrator | Version: 27.5.1
2025-08-29 17:17:10.563717 | orchestrator | API version: 1.47
2025-08-29 17:17:10.563722 | orchestrator | Go version: go1.22.11
2025-08-29 17:17:10.563728 | orchestrator | Git commit: 9f9e405
2025-08-29 17:17:10.563733 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 17:17:10.563739 | orchestrator | OS/Arch: linux/amd64
2025-08-29 17:17:10.563744 | orchestrator | Context: default
2025-08-29 17:17:10.563749 | orchestrator |
2025-08-29 17:17:10.563755 | orchestrator | Server: Docker Engine - Community
2025-08-29 17:17:10.563760 | orchestrator | Engine:
2025-08-29 17:17:10.563774 | orchestrator | Version: 27.5.1
2025-08-29 17:17:10.563779 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-08-29 17:17:10.563805 | orchestrator | Go version: go1.22.11
2025-08-29 17:17:10.563810 | orchestrator | Git commit: 4c9b3b0
2025-08-29 17:17:10.563815 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 17:17:10.563820 | orchestrator | OS/Arch: linux/amd64
2025-08-29 17:17:10.563825 | orchestrator | Experimental: false
2025-08-29 17:17:10.563831 | orchestrator | containerd:
2025-08-29 17:17:10.563836 | orchestrator | Version: 1.7.27
2025-08-29 17:17:10.563841 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-08-29 17:17:10.563847 | orchestrator | runc:
2025-08-29 17:17:10.563852 | orchestrator | Version: 1.2.5
2025-08-29 17:17:10.563857 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-08-29 17:17:10.563862 | orchestrator | docker-init:
2025-08-29 17:17:10.563943 | orchestrator | Version: 0.19.0
2025-08-29 17:17:10.563952 | orchestrator | GitCommit: de40ad0
2025-08-29 17:17:10.568823 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-08-29 17:17:10.580474 | orchestrator | + set -e
2025-08-29 17:17:10.580502 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 17:17:10.580512 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 17:17:10.580521 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 17:17:10.580529 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 17:17:10.580536 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 17:17:10.580544 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 17:17:10.580552 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 17:17:10.580559 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 17:17:10.580567 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 17:17:10.580574 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 17:17:10.580581 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 17:17:10.580588 | orchestrator | ++ export ARA=false
2025-08-29 17:17:10.580596 | orchestrator | ++ ARA=false
2025-08-29 17:17:10.580603 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 17:17:10.580610 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 17:17:10.580617 | orchestrator | ++ export TEMPEST=false
2025-08-29 17:17:10.580624 | orchestrator | ++ TEMPEST=false
2025-08-29 17:17:10.580631 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 17:17:10.580638 | orchestrator | ++ IS_ZUUL=true
2025-08-29 17:17:10.580645 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2025-08-29 17:17:10.580652 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2025-08-29 17:17:10.580659 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 17:17:10.580666 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 17:17:10.580673 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 17:17:10.580680 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 17:17:10.580688 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 17:17:10.580695 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 17:17:10.580702 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 17:17:10.580709 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 17:17:10.580722 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 17:17:10.580730 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 17:17:10.580738 | orchestrator | ++ INTERACTIVE=false
2025-08-29 17:17:10.580745 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 17:17:10.580757 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 17:17:10.580765 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 17:17:10.580773 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0 2025-08-29 17:17:10.587133 | orchestrator | + set -e 2025-08-29 17:17:10.587147 | orchestrator | + VERSION=9.2.0 2025-08-29 17:17:10.587157 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml 2025-08-29 17:17:10.595956 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 17:17:10.595974 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-08-29 17:17:10.600429 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-08-29 17:17:10.606012 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-08-29 17:17:10.615938 | orchestrator | /opt/configuration ~ 2025-08-29 17:17:10.615960 | orchestrator | + set -e 2025-08-29 17:17:10.615973 | orchestrator | + pushd /opt/configuration 2025-08-29 17:17:10.615985 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 17:17:10.617696 | orchestrator | + source /opt/venv/bin/activate 2025-08-29 17:17:10.619598 | orchestrator | ++ deactivate nondestructive 2025-08-29 17:17:10.619617 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:10.619631 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:10.619661 | orchestrator | ++ hash -r 2025-08-29 17:17:10.619672 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:10.619682 | orchestrator | ++ unset VIRTUAL_ENV 2025-08-29 17:17:10.619693 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-08-29 17:17:10.619704 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-08-29 17:17:10.619716 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-08-29 17:17:10.619727 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-08-29 17:17:10.619737 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-08-29 17:17:10.619748 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-08-29 17:17:10.619760 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 17:17:10.619771 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 17:17:10.619782 | orchestrator | ++ export PATH 2025-08-29 17:17:10.619798 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:10.619809 | orchestrator | ++ '[' -z '' ']' 2025-08-29 17:17:10.619820 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-08-29 17:17:10.619830 | orchestrator | ++ PS1='(venv) ' 2025-08-29 17:17:10.619841 | orchestrator | ++ export PS1 2025-08-29 17:17:10.619852 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-08-29 17:17:10.619863 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-08-29 17:17:10.619874 | orchestrator | ++ hash -r 2025-08-29 17:17:10.619885 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-08-29 17:17:11.917631 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-08-29 17:17:11.918945 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2025-08-29 17:17:11.920633 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-08-29 17:17:11.922091 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-08-29 17:17:11.924556 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2025-08-29 17:17:11.934823 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-08-29 17:17:11.936162 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-08-29 17:17:11.937147 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2025-08-29 17:17:11.938372 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-08-29 17:17:11.974331 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3) 2025-08-29 17:17:11.975603 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-08-29 17:17:11.977374 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-08-29 17:17:11.978796 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3) 2025-08-29 17:17:11.982770 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-08-29 17:17:12.216898 | orchestrator | ++ which gilt 2025-08-29 17:17:12.220667 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-08-29 17:17:12.220692 | orchestrator | + /opt/venv/bin/gilt overlay 2025-08-29 17:17:12.480830 | orchestrator | osism.cfg-generics: 2025-08-29 17:17:12.655890 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-08-29 17:17:12.655990 | orchestrator | - copied 
(v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-08-29 17:17:12.656016 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-08-29 17:17:12.656089 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-08-29 17:17:13.438521 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-08-29 17:17:13.452489 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-08-29 17:17:13.798336 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-08-29 17:17:13.850898 | orchestrator | ~ 2025-08-29 17:17:13.850954 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 17:17:13.850969 | orchestrator | + deactivate 2025-08-29 17:17:13.850981 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 17:17:13.850994 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 17:17:13.851005 | orchestrator | + export PATH 2025-08-29 17:17:13.851016 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 17:17:13.851027 | orchestrator | + '[' -n '' ']' 2025-08-29 17:17:13.851041 | orchestrator | + hash -r 2025-08-29 17:17:13.851052 | orchestrator | + '[' -n '' ']' 2025-08-29 17:17:13.851063 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 17:17:13.851074 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 17:17:13.851085 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-08-29 17:17:13.851096 | orchestrator | + unset -f deactivate 2025-08-29 17:17:13.851107 | orchestrator | + popd 2025-08-29 17:17:13.853393 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-08-29 17:17:13.853410 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-08-29 17:17:13.853783 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 17:17:13.909797 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 17:17:13.909839 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-08-29 17:17:13.909851 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-08-29 17:17:14.001395 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 17:17:14.001486 | orchestrator | + source /opt/venv/bin/activate 2025-08-29 17:17:14.001499 | orchestrator | ++ deactivate nondestructive 2025-08-29 17:17:14.001521 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:14.001533 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:14.001790 | orchestrator | ++ hash -r 2025-08-29 17:17:14.001807 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:14.002877 | orchestrator | ++ unset VIRTUAL_ENV 2025-08-29 17:17:14.002912 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-08-29 17:17:14.002926 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-08-29 17:17:14.002940 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-08-29 17:17:14.002954 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-08-29 17:17:14.002967 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-08-29 17:17:14.002981 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-08-29 17:17:14.002995 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 17:17:14.003010 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 17:17:14.003047 | orchestrator | ++ export PATH 2025-08-29 17:17:14.003060 | orchestrator | ++ '[' -n '' ']' 2025-08-29 17:17:14.003074 | orchestrator | ++ '[' -z '' ']' 2025-08-29 17:17:14.003086 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-08-29 17:17:14.003099 | orchestrator | ++ PS1='(venv) ' 2025-08-29 17:17:14.003112 | orchestrator | ++ export PS1 2025-08-29 17:17:14.003126 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-08-29 17:17:14.003139 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-08-29 17:17:14.003152 | orchestrator | ++ hash -r 2025-08-29 17:17:14.003163 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-08-29 17:17:15.343655 | orchestrator | 2025-08-29 17:17:15.343751 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-08-29 17:17:15.343758 | orchestrator | 2025-08-29 17:17:15.343763 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-08-29 17:17:16.015209 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:16.015351 | orchestrator | 2025-08-29 17:17:16.015368 | orchestrator | TASK [Copy fact files] ********************************************************* 
2025-08-29 17:17:17.086847 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:17.086947 | orchestrator | 2025-08-29 17:17:17.086962 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-08-29 17:17:17.086975 | orchestrator | 2025-08-29 17:17:17.086986 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:17:19.385658 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:19.385775 | orchestrator | 2025-08-29 17:17:19.385793 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-08-29 17:17:19.445811 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:19.445902 | orchestrator | 2025-08-29 17:17:19.445917 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-08-29 17:17:19.933665 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:19.933772 | orchestrator | 2025-08-29 17:17:19.933790 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-08-29 17:17:19.974315 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:19.974419 | orchestrator | 2025-08-29 17:17:19.974435 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 17:17:20.333037 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:20.333127 | orchestrator | 2025-08-29 17:17:20.333141 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-08-29 17:17:20.390323 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:20.390438 | orchestrator | 2025-08-29 17:17:20.390455 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-08-29 17:17:20.736382 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:20.736491 | orchestrator | 2025-08-29 17:17:20.736506 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2025-08-29 17:17:20.863633 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:20.863729 | orchestrator | 2025-08-29 17:17:20.863742 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-08-29 17:17:20.863755 | orchestrator | 2025-08-29 17:17:20.863766 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:17:22.651343 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:22.651429 | orchestrator | 2025-08-29 17:17:22.651443 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-08-29 17:17:22.777658 | orchestrator | included: osism.services.traefik for testbed-manager 2025-08-29 17:17:22.777751 | orchestrator | 2025-08-29 17:17:22.777765 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-08-29 17:17:22.844056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-08-29 17:17:22.844129 | orchestrator | 2025-08-29 17:17:22.844143 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-08-29 17:17:24.026088 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-08-29 17:17:24.026195 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-08-29 17:17:24.026215 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-08-29 17:17:24.026228 | orchestrator | 2025-08-29 17:17:24.026241 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-08-29 17:17:25.895535 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-08-29 17:17:25.895695 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2025-08-29 17:17:25.895721 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-08-29 17:17:25.895744 | orchestrator | 2025-08-29 17:17:25.895766 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-08-29 17:17:26.618401 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:17:26.618506 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:26.618522 | orchestrator | 2025-08-29 17:17:26.618535 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-08-29 17:17:27.330168 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:17:27.330277 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:27.330357 | orchestrator | 2025-08-29 17:17:27.330378 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-08-29 17:17:27.390573 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:27.390669 | orchestrator | 2025-08-29 17:17:27.390683 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-08-29 17:17:27.821067 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:27.821170 | orchestrator | 2025-08-29 17:17:27.821187 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-08-29 17:17:27.931025 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-08-29 17:17:27.931118 | orchestrator | 2025-08-29 17:17:27.931133 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-08-29 17:17:29.066627 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:29.066739 | orchestrator | 2025-08-29 17:17:29.066765 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-08-29 
17:17:29.933502 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:29.933598 | orchestrator | 2025-08-29 17:17:29.933611 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-08-29 17:17:40.930219 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:40.930363 | orchestrator | 2025-08-29 17:17:40.930399 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-08-29 17:17:40.985001 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:40.985103 | orchestrator | 2025-08-29 17:17:40.985118 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-08-29 17:17:40.985131 | orchestrator | 2025-08-29 17:17:40.985144 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:17:42.834733 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:42.834844 | orchestrator | 2025-08-29 17:17:42.834862 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-08-29 17:17:42.952998 | orchestrator | included: osism.services.manager for testbed-manager 2025-08-29 17:17:42.953090 | orchestrator | 2025-08-29 17:17:42.953105 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-08-29 17:17:43.027896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 17:17:43.027986 | orchestrator | 2025-08-29 17:17:43.028072 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-08-29 17:17:45.719208 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:45.719341 | orchestrator | 2025-08-29 17:17:45.719365 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-08-29 17:17:45.776672 | 
orchestrator | ok: [testbed-manager] 2025-08-29 17:17:45.776778 | orchestrator | 2025-08-29 17:17:45.776793 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-08-29 17:17:45.912439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-08-29 17:17:45.912530 | orchestrator | 2025-08-29 17:17:45.912544 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-08-29 17:17:48.883078 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-08-29 17:17:48.883193 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-08-29 17:17:48.883208 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-08-29 17:17:48.883220 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-08-29 17:17:48.883231 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-08-29 17:17:48.883243 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-08-29 17:17:48.883254 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-08-29 17:17:48.883265 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-08-29 17:17:48.883276 | orchestrator | 2025-08-29 17:17:48.883291 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-08-29 17:17:49.540799 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:49.540900 | orchestrator | 2025-08-29 17:17:49.540915 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-08-29 17:17:50.191849 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:50.191943 | orchestrator | 2025-08-29 17:17:50.191969 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-08-29 
17:17:50.286156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-08-29 17:17:50.286241 | orchestrator | 2025-08-29 17:17:50.286267 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-08-29 17:17:51.606451 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-08-29 17:17:51.606541 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-08-29 17:17:51.606555 | orchestrator | 2025-08-29 17:17:51.606569 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-08-29 17:17:52.271339 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:52.271428 | orchestrator | 2025-08-29 17:17:52.271443 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-08-29 17:17:52.331644 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:52.331721 | orchestrator | 2025-08-29 17:17:52.331733 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-08-29 17:17:52.394151 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:52.394231 | orchestrator | 2025-08-29 17:17:52.394245 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-08-29 17:17:52.457080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-08-29 17:17:52.457156 | orchestrator | 2025-08-29 17:17:52.457170 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-08-29 17:17:53.992366 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:17:53.992465 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:17:53.992482 | orchestrator | changed: [testbed-manager] 
2025-08-29 17:17:53.992495 | orchestrator | 2025-08-29 17:17:53.992508 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-08-29 17:17:54.695020 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:54.695108 | orchestrator | 2025-08-29 17:17:54.695119 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-08-29 17:17:54.743988 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:54.744126 | orchestrator | 2025-08-29 17:17:54.744155 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-08-29 17:17:54.863111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-08-29 17:17:54.863202 | orchestrator | 2025-08-29 17:17:54.863219 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-08-29 17:17:55.424340 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:55.424431 | orchestrator | 2025-08-29 17:17:55.424448 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-08-29 17:17:55.859069 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:55.859149 | orchestrator | 2025-08-29 17:17:55.859163 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-08-29 17:17:57.166115 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-08-29 17:17:57.166223 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-08-29 17:17:57.166245 | orchestrator | 2025-08-29 17:17:57.166258 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-08-29 17:17:57.893595 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:57.893677 | orchestrator | 2025-08-29 17:17:57.893692 | orchestrator | TASK 
[osism.services.manager : Check for conductor.yml] ************************ 2025-08-29 17:17:58.324784 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:58.324863 | orchestrator | 2025-08-29 17:17:58.324878 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-08-29 17:17:58.721871 | orchestrator | changed: [testbed-manager] 2025-08-29 17:17:58.721964 | orchestrator | 2025-08-29 17:17:58.721980 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-08-29 17:17:58.771243 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:17:58.771294 | orchestrator | 2025-08-29 17:17:58.771357 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-08-29 17:17:58.841289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-08-29 17:17:58.841380 | orchestrator | 2025-08-29 17:17:58.841393 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-08-29 17:17:58.898411 | orchestrator | ok: [testbed-manager] 2025-08-29 17:17:58.898513 | orchestrator | 2025-08-29 17:17:58.898527 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-08-29 17:18:01.075272 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-08-29 17:18:01.075439 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-08-29 17:18:01.075458 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-08-29 17:18:01.075470 | orchestrator | 2025-08-29 17:18:01.075482 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-08-29 17:18:01.826768 | orchestrator | changed: [testbed-manager] 2025-08-29 17:18:01.826874 | orchestrator | 2025-08-29 17:18:01.826890 | orchestrator | TASK 
[osism.services.manager : Copy hubble wrapper script] ********************* 2025-08-29 17:18:02.585466 | orchestrator | changed: [testbed-manager] 2025-08-29 17:18:02.585573 | orchestrator | 2025-08-29 17:18:02.585590 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-08-29 17:18:03.335538 | orchestrator | changed: [testbed-manager] 2025-08-29 17:18:03.335662 | orchestrator | 2025-08-29 17:18:03.335680 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-08-29 17:18:03.422609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-08-29 17:18:03.422703 | orchestrator | 2025-08-29 17:18:03.422717 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-08-29 17:18:03.478201 | orchestrator | ok: [testbed-manager] 2025-08-29 17:18:03.478281 | orchestrator | 2025-08-29 17:18:03.478294 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-08-29 17:18:04.260584 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-08-29 17:18:04.260699 | orchestrator | 2025-08-29 17:18:04.260714 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-08-29 17:18:04.353569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-08-29 17:18:04.353670 | orchestrator | 2025-08-29 17:18:04.353685 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-08-29 17:18:05.113942 | orchestrator | changed: [testbed-manager] 2025-08-29 17:18:05.114094 | orchestrator | 2025-08-29 17:18:05.114111 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-08-29 
17:18:05.730598 | orchestrator | ok: [testbed-manager] 2025-08-29 17:18:05.730701 | orchestrator | 2025-08-29 17:18:05.730719 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-08-29 17:18:05.784956 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:18:05.785052 | orchestrator | 2025-08-29 17:18:05.785071 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-08-29 17:18:05.844082 | orchestrator | ok: [testbed-manager] 2025-08-29 17:18:05.844169 | orchestrator | 2025-08-29 17:18:05.844184 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-08-29 17:18:06.687434 | orchestrator | changed: [testbed-manager] 2025-08-29 17:18:06.687528 | orchestrator | 2025-08-29 17:18:06.687542 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-08-29 17:19:18.618758 | orchestrator | changed: [testbed-manager] 2025-08-29 17:19:18.618870 | orchestrator | 2025-08-29 17:19:18.618886 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-08-29 17:19:19.546136 | orchestrator | ok: [testbed-manager] 2025-08-29 17:19:19.546231 | orchestrator | 2025-08-29 17:19:19.546245 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-08-29 17:19:19.597223 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:19:19.597303 | orchestrator | 2025-08-29 17:19:19.597338 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-08-29 17:19:22.359693 | orchestrator | changed: [testbed-manager] 2025-08-29 17:19:22.359791 | orchestrator | 2025-08-29 17:19:22.359806 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-08-29 17:19:22.424988 | orchestrator | ok: [testbed-manager] 2025-08-29 17:19:22.425022 | 
orchestrator | 2025-08-29 17:19:22.425035 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 17:19:22.425046 | orchestrator | 2025-08-29 17:19:22.425088 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-08-29 17:19:22.476703 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:19:22.476765 | orchestrator | 2025-08-29 17:19:22.476779 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-08-29 17:20:22.563836 | orchestrator | Pausing for 60 seconds 2025-08-29 17:20:22.563948 | orchestrator | changed: [testbed-manager] 2025-08-29 17:20:22.563965 | orchestrator | 2025-08-29 17:20:22.563979 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-08-29 17:20:26.777394 | orchestrator | changed: [testbed-manager] 2025-08-29 17:20:26.777501 | orchestrator | 2025-08-29 17:20:26.777517 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-08-29 17:21:08.517102 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-08-29 17:21:08.517192 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
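The "Wait for an healthy manager service" handler above polls the service until Docker reports it healthy, retrying up to 50 times; in this run it succeeded after two retries (50 and 49 retries left). A minimal sketch of that retry pattern, with a hypothetical `check_health` probe standing in for the job's actual `docker inspect -f '{{.State.Health.Status}}'` call:

```shell
# Sketch of a retry-until-healthy loop (assumptions: check_health is a
# caller-supplied probe that prints the current health status).
wait_until_healthy() {
    local retries=$1 delay=$2 attempt=1
    while (( attempt <= retries )); do
        # Succeed as soon as the probe reports "healthy".
        if [[ "$(check_health)" == healthy ]]; then
            return 0
        fi
        echo "FAILED - RETRYING ($((retries - attempt)) retries left)." >&2
        sleep "$delay"
        attempt=$((attempt + 1))
    done
    return 1
}
```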
2025-08-29 17:21:08.517208 | orchestrator | changed: [testbed-manager] 2025-08-29 17:21:08.517221 | orchestrator | 2025-08-29 17:21:08.517250 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-08-29 17:21:18.788723 | orchestrator | changed: [testbed-manager] 2025-08-29 17:21:18.788819 | orchestrator | 2025-08-29 17:21:18.788834 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-08-29 17:21:18.868061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-08-29 17:21:18.868096 | orchestrator | 2025-08-29 17:21:18.868109 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-08-29 17:21:18.868121 | orchestrator | 2025-08-29 17:21:18.868133 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-08-29 17:21:18.920628 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:21:18.920694 | orchestrator | 2025-08-29 17:21:18.920707 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:21:18.920719 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 17:21:18.920731 | orchestrator | 2025-08-29 17:21:19.039190 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-08-29 17:21:19.039266 | orchestrator | + deactivate 2025-08-29 17:21:19.039285 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-08-29 17:21:19.039346 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-08-29 17:21:19.039360 | orchestrator | + export PATH 2025-08-29 17:21:19.039372 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 
17:21:19.039383 | orchestrator | + '[' -n '' ']' 2025-08-29 17:21:19.039395 | orchestrator | + hash -r 2025-08-29 17:21:19.039406 | orchestrator | + '[' -n '' ']' 2025-08-29 17:21:19.039417 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 17:21:19.039428 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 17:21:19.039439 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-08-29 17:21:19.039450 | orchestrator | + unset -f deactivate 2025-08-29 17:21:19.039462 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-08-29 17:21:19.046583 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 17:21:19.046627 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 17:21:19.046639 | orchestrator | + local max_attempts=60 2025-08-29 17:21:19.046649 | orchestrator | + local name=ceph-ansible 2025-08-29 17:21:19.046659 | orchestrator | + local attempt_num=1 2025-08-29 17:21:19.047317 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:21:19.081777 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:21:19.081831 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 17:21:19.081843 | orchestrator | + local max_attempts=60 2025-08-29 17:21:19.081855 | orchestrator | + local name=kolla-ansible 2025-08-29 17:21:19.081866 | orchestrator | + local attempt_num=1 2025-08-29 17:21:19.082453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 17:21:19.114471 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:21:19.114534 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 17:21:19.114605 | orchestrator | + local max_attempts=60 2025-08-29 17:21:19.114632 | orchestrator | + local name=osism-ansible 2025-08-29 17:21:19.114661 | orchestrator | + local attempt_num=1 2025-08-29 17:21:19.114935 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible
2025-08-29 17:21:19.147889 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:21:19.147960 | orchestrator | + [[ true == \t\r\u\e ]]
2025-08-29 17:21:19.147977 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-08-29 17:21:19.885185 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-08-29 17:21:20.095924 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-08-29 17:21:20.096005 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096017 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096025 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-08-29 17:21:20.096033 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-08-29 17:21:20.096040 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096047 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096054 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-08-29 17:21:20.096061 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096067 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-08-29 17:21:20.096074 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096081 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-08-29 17:21:20.096088 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096094 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.096101 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-08-29 17:21:20.103653 | orchestrator | ++ semver 9.2.0 7.0.0
2025-08-29 17:21:20.145224 | orchestrator | + [[ 1 -ge 0 ]]
2025-08-29 17:21:20.145337 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-08-29 17:21:20.147799 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-08-29 17:21:32.458554 | orchestrator | 2025-08-29 17:21:32 | INFO  | Task f5734bf3-bd4c-4231-8db5-2de04c4db396 (resolvconf) was prepared for execution.
2025-08-29 17:21:32.458697 | orchestrator | 2025-08-29 17:21:32 | INFO  | It takes a moment until task f5734bf3-bd4c-4231-8db5-2de04c4db396 (resolvconf) has been started and output is visible here.
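The `++ semver 9.2.0 7.0.0` trace above compares two dotted versions and prints 1 because 9.2.0 > 7.0.0, which the subsequent `+ [[ 1 -ge 0 ]]` check accepts. A field-by-field sketch of such a comparison (hypothetical `semver_cmp`, not the job's actual `semver` helper):

```shell
# Compare two dotted version strings field by field.
# Prints 1 if the first is greater, -1 if smaller, 0 if equal.
semver_cmp() {
    local IFS=.
    # Split each version on "." into an array of numeric fields.
    local -a a=($1) b=($2)
    local i
    for i in 0 1 2; do
        # Missing fields default to 0 (e.g. "1.2" vs "1.2.0").
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}
```

Comparing numerically per field (rather than as strings) is what makes 1.10.0 sort after 1.9.9.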
2025-08-29 17:21:47.642416 | orchestrator | 2025-08-29 17:21:47.642523 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-08-29 17:21:47.642540 | orchestrator | 2025-08-29 17:21:47.642552 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:21:47.642564 | orchestrator | Friday 29 August 2025 17:21:36 +0000 (0:00:00.156) 0:00:00.156 ********* 2025-08-29 17:21:47.642575 | orchestrator | ok: [testbed-manager] 2025-08-29 17:21:47.642587 | orchestrator | 2025-08-29 17:21:47.642598 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 17:21:47.642610 | orchestrator | Friday 29 August 2025 17:21:41 +0000 (0:00:04.926) 0:00:05.083 ********* 2025-08-29 17:21:47.642621 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:21:47.642632 | orchestrator | 2025-08-29 17:21:47.642643 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 17:21:47.642654 | orchestrator | Friday 29 August 2025 17:21:41 +0000 (0:00:00.069) 0:00:05.153 ********* 2025-08-29 17:21:47.642665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-08-29 17:21:47.642677 | orchestrator | 2025-08-29 17:21:47.642687 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 17:21:47.642698 | orchestrator | Friday 29 August 2025 17:21:41 +0000 (0:00:00.089) 0:00:05.242 ********* 2025-08-29 17:21:47.642709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 17:21:47.642720 | orchestrator | 2025-08-29 17:21:47.642731 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-08-29 17:21:47.642742 | orchestrator | Friday 29 August 2025 17:21:41 +0000 (0:00:00.086) 0:00:05.328 ********* 2025-08-29 17:21:47.642753 | orchestrator | ok: [testbed-manager] 2025-08-29 17:21:47.642763 | orchestrator | 2025-08-29 17:21:47.642774 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 17:21:47.642785 | orchestrator | Friday 29 August 2025 17:21:42 +0000 (0:00:01.256) 0:00:06.585 ********* 2025-08-29 17:21:47.642796 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:21:47.642807 | orchestrator | 2025-08-29 17:21:47.642818 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 17:21:47.642829 | orchestrator | Friday 29 August 2025 17:21:42 +0000 (0:00:00.065) 0:00:06.651 ********* 2025-08-29 17:21:47.642839 | orchestrator | ok: [testbed-manager] 2025-08-29 17:21:47.642850 | orchestrator | 2025-08-29 17:21:47.642862 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 17:21:47.642873 | orchestrator | Friday 29 August 2025 17:21:43 +0000 (0:00:00.495) 0:00:07.146 ********* 2025-08-29 17:21:47.642884 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:21:47.642896 | orchestrator | 2025-08-29 17:21:47.642909 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 17:21:47.642922 | orchestrator | Friday 29 August 2025 17:21:43 +0000 (0:00:00.107) 0:00:07.254 ********* 2025-08-29 17:21:47.642934 | orchestrator | changed: [testbed-manager] 2025-08-29 17:21:47.642947 | orchestrator | 2025-08-29 17:21:47.642959 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 17:21:47.642971 | orchestrator | Friday 29 August 2025 17:21:44 +0000 (0:00:00.562) 0:00:07.817 ********* 2025-08-29 17:21:47.642983 | orchestrator | changed: 
[testbed-manager] 2025-08-29 17:21:47.643019 | orchestrator | 2025-08-29 17:21:47.643032 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 17:21:47.643044 | orchestrator | Friday 29 August 2025 17:21:45 +0000 (0:00:01.051) 0:00:08.868 ********* 2025-08-29 17:21:47.643056 | orchestrator | ok: [testbed-manager] 2025-08-29 17:21:47.643068 | orchestrator | 2025-08-29 17:21:47.643080 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 17:21:47.643092 | orchestrator | Friday 29 August 2025 17:21:46 +0000 (0:00:00.981) 0:00:09.849 ********* 2025-08-29 17:21:47.643104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-08-29 17:21:47.643117 | orchestrator | 2025-08-29 17:21:47.643129 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 17:21:47.643141 | orchestrator | Friday 29 August 2025 17:21:46 +0000 (0:00:00.083) 0:00:09.933 ********* 2025-08-29 17:21:47.643153 | orchestrator | changed: [testbed-manager] 2025-08-29 17:21:47.643165 | orchestrator | 2025-08-29 17:21:47.643177 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:21:47.643203 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:21:47.643216 | orchestrator | 2025-08-29 17:21:47.643229 | orchestrator | 2025-08-29 17:21:47.643240 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:21:47.643250 | orchestrator | Friday 29 August 2025 17:21:47 +0000 (0:00:01.162) 0:00:11.095 ********* 2025-08-29 17:21:47.643261 | orchestrator | =============================================================================== 2025-08-29 17:21:47.643271 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.93s 2025-08-29 17:21:47.643304 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.26s 2025-08-29 17:21:47.643315 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-08-29 17:21:47.643326 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2025-08-29 17:21:47.643337 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2025-08-29 17:21:47.643347 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2025-08-29 17:21:47.643376 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-08-29 17:21:47.643387 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.11s 2025-08-29 17:21:47.643398 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-08-29 17:21:47.643409 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-08-29 17:21:47.643420 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-08-29 17:21:47.643431 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-08-29 17:21:47.643441 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-08-29 17:21:47.930821 | orchestrator | + osism apply sshconfig 2025-08-29 17:21:59.891193 | orchestrator | 2025-08-29 17:21:59 | INFO  | Task e4837c5d-60b0-480d-bcb1-94185c665720 (sshconfig) was prepared for execution. 
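The resolvconf play above replaced /etc/resolv.conf with a symlink to systemd-resolved's stub file (the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task, reported as changed). A hedged shell equivalent of that step (the role itself does this through Ansible's file module, not a script):

```shell
# Idempotent symlink creation, mirroring the role's link task: report
# "changed" only when the link had to be (re)created, "ok" otherwise.
link_stub_resolv() {
    local stub=$1 dest=$2   # normally /run/systemd/resolve/stub-resolv.conf and /etc/resolv.conf
    if [[ "$(readlink "$dest" 2>/dev/null)" != "$stub" ]]; then
        ln -sfn "$stub" "$dest"
        echo changed
    else
        echo ok
    fi
}
```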
2025-08-29 17:21:59.891353 | orchestrator | 2025-08-29 17:21:59 | INFO  | It takes a moment until task e4837c5d-60b0-480d-bcb1-94185c665720 (sshconfig) has been started and output is visible here. 2025-08-29 17:22:11.913215 | orchestrator | 2025-08-29 17:22:11.913410 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-08-29 17:22:11.913431 | orchestrator | 2025-08-29 17:22:11.913444 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-08-29 17:22:11.913456 | orchestrator | Friday 29 August 2025 17:22:03 +0000 (0:00:00.166) 0:00:00.166 ********* 2025-08-29 17:22:11.913493 | orchestrator | ok: [testbed-manager] 2025-08-29 17:22:11.913506 | orchestrator | 2025-08-29 17:22:11.913517 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-08-29 17:22:11.913529 | orchestrator | Friday 29 August 2025 17:22:04 +0000 (0:00:00.578) 0:00:00.744 ********* 2025-08-29 17:22:11.913540 | orchestrator | changed: [testbed-manager] 2025-08-29 17:22:11.913551 | orchestrator | 2025-08-29 17:22:11.913562 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-08-29 17:22:11.913573 | orchestrator | Friday 29 August 2025 17:22:04 +0000 (0:00:00.526) 0:00:01.271 ********* 2025-08-29 17:22:11.913584 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-08-29 17:22:11.913596 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-08-29 17:22:11.913607 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-08-29 17:22:11.913618 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-08-29 17:22:11.913629 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-08-29 17:22:11.913639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-08-29 17:22:11.913650 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-08-29 17:22:11.913661 | orchestrator | 2025-08-29 17:22:11.913672 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-08-29 17:22:11.913683 | orchestrator | Friday 29 August 2025 17:22:10 +0000 (0:00:06.075) 0:00:07.346 ********* 2025-08-29 17:22:11.913694 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:11.913704 | orchestrator | 2025-08-29 17:22:11.913715 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-08-29 17:22:11.913726 | orchestrator | Friday 29 August 2025 17:22:11 +0000 (0:00:00.070) 0:00:07.417 ********* 2025-08-29 17:22:11.913737 | orchestrator | changed: [testbed-manager] 2025-08-29 17:22:11.913750 | orchestrator | 2025-08-29 17:22:11.913762 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:22:11.913775 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:22:11.913788 | orchestrator | 2025-08-29 17:22:11.913800 | orchestrator | 2025-08-29 17:22:11.913812 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:22:11.913825 | orchestrator | Friday 29 August 2025 17:22:11 +0000 (0:00:00.608) 0:00:08.026 ********* 2025-08-29 17:22:11.913855 | orchestrator | =============================================================================== 2025-08-29 17:22:11.913868 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.08s 2025-08-29 17:22:11.913880 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-08-29 17:22:11.913892 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-08-29 17:22:11.913905 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2025-08-29 17:22:11.913917 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-08-29 17:22:12.193691 | orchestrator | + osism apply known-hosts 2025-08-29 17:22:24.138612 | orchestrator | 2025-08-29 17:22:24 | INFO  | Task 0ac42bac-fa3b-4cef-b9e2-272357e76b44 (known-hosts) was prepared for execution. 2025-08-29 17:22:24.138703 | orchestrator | 2025-08-29 17:22:24 | INFO  | It takes a moment until task 0ac42bac-fa3b-4cef-b9e2-272357e76b44 (known-hosts) has been started and output is visible here. 2025-08-29 17:22:40.696353 | orchestrator | 2025-08-29 17:22:40.696445 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-08-29 17:22:40.696461 | orchestrator | 2025-08-29 17:22:40.696473 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-08-29 17:22:40.696484 | orchestrator | Friday 29 August 2025 17:22:27 +0000 (0:00:00.154) 0:00:00.154 ********* 2025-08-29 17:22:40.696495 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 17:22:40.696530 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 17:22:40.696542 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 17:22:40.696552 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 17:22:40.696563 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 17:22:40.696573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 17:22:40.696584 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 17:22:40.696594 | orchestrator | 2025-08-29 17:22:40.696605 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-08-29 17:22:40.696616 | orchestrator | Friday 29 August 2025 17:22:33 +0000 (0:00:06.057) 0:00:06.212 ********* 2025-08-29 
17:22:40.696628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 17:22:40.696640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 17:22:40.696650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 17:22:40.696661 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 17:22:40.696672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 17:22:40.696682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 17:22:40.696693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 17:22:40.696703 | orchestrator | 2025-08-29 17:22:40.696714 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:40.696724 | orchestrator | Friday 29 August 2025 17:22:34 +0000 (0:00:00.208) 0:00:06.421 ********* 2025-08-29 17:22:40.696735 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAINqyVtKLj2o/nsmsrtMWoVzuu/X83Im2LgQdVEG+0m+3) 2025-08-29 17:22:40.696750 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnSE/PUykO/FIF6f1a/dCKDnCVGqheJnkdqoJEQvAg06iPGcXgIZzsFlCn2Ka490ifuPvau/UDuGw4ACyW0c6LMo49wFINvGggVLIOdoTgMuU5zMYn0PhkZe2wvlSo7U0mgFmcy9BRlL1OV8a+rltWTXgQedpzVyUPXJ26pUZ4200Sc7at9YBRs/7rwGYea5e2u2uJPyLJ1Eg+IWdOTxOvHlcVpZWvtLLeRS3CKCmzGti474WEI62ucfRGCO44VQJ0LoOkgrPePH+NEnE6HajgAx/7WmcjgrC4Ks6gpqVDNo39iNCN/yVTrRNsg1ltKcZ71/gY+3yrNZk4TW1wt22J7hHMEr4065lQVTmqJGjqEapqfm3slz/7Sc0IoNGKpvnN8PhpNAb0XUUpTyqHjI71PdE67n+z7/feX/uRv1D9H7zpJvsHr7ph4Uk2p7G4Yi59u0LoLjetz+Xg98L0VlVYozaEfLGgo148GlA5xSlbrRrx2Co1J3i6bbOVSu6WlVk=) 2025-08-29 17:22:40.696824 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO9JHoBOaW6lZGprm6YWIIDmmxFZKU/CZMrM5+e0Cm+JQoL4KVJ7MbauF2gumYUvgzDP14a3s856tJinRSeC2Kg=) 2025-08-29 17:22:40.696837 | orchestrator | 2025-08-29 17:22:40.696848 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:40.696859 | orchestrator | Friday 29 August 2025 17:22:35 +0000 (0:00:01.332) 0:00:07.753 ********* 2025-08-29 17:22:40.696886 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7CIxoF5U/ugOOv7sVvjsI6WjQBrKYEcCJLjzKPS2sTeclOTUQrPDOfG21jYzbk/79Az4OTblWwPeKSjJMyea0LM1eoRoSdl5c0L4QwoXjFWbvcbZfmPD2KXxd38oHEdvPyE5MXrJFuCAk13Fy5jhttiEPM1U/IgzvPEe1SLbhEnw1YtH5c1Vd8ZNgs2mMXCDwAO5WMZKKYiWEMTtNfSzW4kh/p/WmIo50A1VfldWs++IOI53vXyBUjTHjujLl9SfXSh9ihGtRWgRSINBRFoZCDkN2uJfEY+VKpZtW4AWJc0vb0msxJOt3xZjHKwtiUjYqcNhdkJSojc8SMgP0qaXMTKziSMuJ6vNQr1cJLeU+cFLOmqW5hnb979OJKcLWbX5AJWUMUQSqgtsdg25jw1ziuaTPPbpbaNn6yJhEZXZHsL0BVdCTZMLnQ2GUcMkPaLJOaRjUvVG8re9teSYG1SEbEaNiF2diogj2qeo6eDwZfrj/NnVK42k6zy10mpuCFGc=) 2025-08-29 17:22:40.696909 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu1+cBA2fWR8w9dHRjbpA11Ju5SpAHmh0DsHROpZObm) 2025-08-29 17:22:40.696922 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKD8qTC+p62vaHW0TQ1GFUji2TeQhCBntmvGIOp5dscIcIpR8OqYrx0L4UdhGUbJVboKqOfRSm4Bnnh0FvNf/SI=) 2025-08-29 17:22:40.696934 | orchestrator | 2025-08-29 17:22:40.696946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:40.696959 | orchestrator | Friday 29 August 2025 17:22:36 +0000 (0:00:01.123) 0:00:08.877 ********* 2025-08-29 17:22:40.696972 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiPUVX6m9AMA/GbVhomXQZLNDNGmWfrv5MQ0RrYQIlw3JT8D05KOc9Cn3K0xi1mFPHO6laj/wfqlZTVGX9yuBMY7rlg6tIzQ0x/nDMJHB2vSWbBs1Q8hmLYMyZN+sSq6EmjBPFTtaql1pmajru0wJdOoLN+4Pqyt8/yaDhJ9/sV8mOM3UYW5Y27FGzYSbbx+pZXuqms00fgvV7EL8MRNBTsus8BUqWaztDXZb9PUaBNnaYl2V3FiFsFhPEmSljywrXGoM+W18eTCO8+HWTaKBwbF6lVrjfMKSfhpQcMvxUXdKMWgkGNzCe71yvSaA6URDZArakanJ7TVTp7+cxmK/dm+FwCmC/An+2R4oeph4bqsgnkY4pPR7jM3wLu2zm/4X5UwSb42JyYuGZFlQbT+AkfbbSFyDAiU/W/OORCXoTG4wKyNrOEk3HX4grglK1DNLVIPMMH08J4ugWGkC7jIjwu2cakvskxtj93IQMKv9XP5CUoDoUbRSCBijjtcPHv4U=) 2025-08-29 17:22:40.696985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBpDewE8D4pD0woDZ+Ot7IFZHqXuWdzW+kLw5Q9I2Dqjo6CPVPkfpxOTQdzoeA4lam97IdehadeSUkXbNndDx4=) 2025-08-29 17:22:40.696998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBxiAJD3i0n/cLKTGWcg+cwakxuX9KzafLsOZL4UnRtx) 2025-08-29 17:22:40.697011 | orchestrator | 2025-08-29 17:22:40.697023 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:40.697035 | orchestrator | Friday 29 August 2025 17:22:37 +0000 (0:00:01.034) 
0:00:09.912 ********* 2025-08-29 17:22:40.697048 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICKyqao87OufA0X1tqrVaxWF8PnNbtBdKGn7g6Cc7WLX) 2025-08-29 17:22:40.697061 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHns8a0uUi/qHIg+xnhAWotEtI+ZS1PTjOokzUEHXsfjabSzfZ8QxDqENSZRIrwcE0YtNbgJg84i96BTuw10abmYLPNmt3tXRNaNw2xsRnkE47RhAppbY5YG6Ro0jgTuzh+DsNqiDNkYV1M4t2tOZDcGHi+4mlHDXePv0B0wRTRimvatHFyQuohyM5kEt8lBAAgi4WGTJCaZ2jmoq6/SgbrYAy/E6rmqBCxL+7RQhyjQYq7GyVf5Yx/v3DymP1IXeUsNXsy4c66sJvlKmtTj68lprzp8RDk3nQXf8y7oYPSQ918FDKMqVCTQeX+V/RMUsAqrm0pAZ8hMJPxG7Z/KpW4EtUo2o94rGK8U/MNXnyG9ZDqQHtOtrm3AQc2EEwe1o9D6AbyxIkQYM4DCm2ubNREXWcGPtPCqV7PqWkPif2V9mvajrPn41Qd5xgTUsGxZa2/DE+gP3NA87hlKMa1vjCgPulNhIXGmU+ViS4yvxtpigAtfiPuYbtwo+Uw4f3R2M=) 2025-08-29 17:22:40.697074 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBYCnaa4AXwo8e4WdVUX8UQYeGl0USG5c1kJ50sOrGYO0JLCzkWsFrXoh6DMopWQrvohAhEQK77tEaRex4nEEEo=) 2025-08-29 17:22:40.697087 | orchestrator | 2025-08-29 17:22:40.697099 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:40.697111 | orchestrator | Friday 29 August 2025 17:22:38 +0000 (0:00:01.027) 0:00:10.940 ********* 2025-08-29 17:22:40.697124 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMq+/VN2XkpZEgWeHbOESlV5mTrVhaaGHtZuQaHfQ0JFiryAvK1m/+hkloXJ5q3gLwHl6sgj0HvpHsFCSpET0GnvIYfktvR6ExliaJauX0PQjsIKRWCQwTY7d47Dr1YntrBj2T6j4EePnLm7o/4SELFTB+UfV+JOp6vJzRXPsWtSHfRbE8E2RAUmtuBinV7ha5BxFs0zdzTCVQpHkWC0M+hrgSSBf4at4z+SWP6ih89qsWTyKcD2ep05emdOYh7F+CQ8fOjY+Kj0dRcWKAXUdK3wXVSspPak3y9KIU56DBFvYsb3X3UXJiREVXxXaIEOK60bKMrJ+FMpWtj8znwX0I8+kJ1/G2TFoeSJH7+QtAgZ15FSjYR/9vGGofpsQwf8+/v+GkhyrYYrajfuNf+/PLwkyQTBV9UhX+IU61NIxZvXv6qQ0eMsDCuocsyZYjmiG6NJU3jQtumIlHd3NkSspSUdLeYreiqt1LmASJZ7FgOgg6wQhv81EkSQU+CB1d7bk=) 2025-08-29 17:22:40.697145 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMCYkQzU8vrxan4UO4nINVf3j6uVmfc0K++nIkqDMr6) 2025-08-29 17:22:40.697164 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDlfVU4d8U0RX3jOAIZ6T3av1tWU/i7tVBTsuSG5cBX+/mwZEkpxdwn2GkCujtab6SNIUUTLs4MEagxHZalnf6M=) 2025-08-29 17:22:40.697177 | orchestrator | 2025-08-29 17:22:40.697189 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:40.697201 | orchestrator | Friday 29 August 2025 17:22:39 +0000 (0:00:01.059) 0:00:11.999 ********* 2025-08-29 17:22:40.697227 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVwmpIQmzZJcES0M12D4PTw9mVhAQzlHJvI4cGO+boMa7ipKo6LBY3XN7s+NqH+v0GWLGh/05rzXEp7vQ+6j0jnl4LRl1f03osA07uWzOGAK1C3MTof2OE87Ce20iXCzwqvDqZ8aM8xt1DdzgTRcfYbrwXvdcd7rPYMCGouViFTNqByjK+ojAw1W3XCKJCBMLLdFr2NR+dInuGDdywxOrLUhg/PU4VU+41sW6XO/TGO0Coj24EAQYB7guypD6pkjfOp30bRm9WVipUpWlFmS/Rnk8GRhjfgZRVHlQWKMTBaZG3WRmsUcxq5rXJkErsBRK8+gHhL53W9OeA3G+VKs18XZL/lpaNkj8YCdL+KafofQrAvAjzuflBzan4avvBYXI31VyQILlvOQnaMcOeXyRAF9emC4QXS5tsjwp8TnPo3f7G45O+qaumCkDQng5GgBM0OVUFcLucce11rzZtStNqWDvS+npnx33YCCD05FD5x1m4vceZbtWzbHkaXIoNogU=) 2025-08-29 17:22:51.122801 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMAvkGPJ8OXxYOsmBg6veHnoanDNdZVhSX+1wwideEbp) 2025-08-29 17:22:51.122904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHtGTHANpRmMr+0+eIEcp8U5wZ+SeBmqeopNNubHdOmye65wHvqaYiv2LzmeSfNi6CufNPt9+8OAfZv3KqShhhk=) 2025-08-29 17:22:51.122922 | orchestrator | 2025-08-29 17:22:51.122935 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:51.122948 | orchestrator | Friday 29 August 2025 17:22:40 +0000 (0:00:01.024) 0:00:13.024 ********* 2025-08-29 17:22:51.122959 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAn3VWRpLTPdVP0zrtfpfDwhtDZY41TtLTcGuFKcsuQb) 2025-08-29 17:22:51.122973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC86UrMEVAHl4UTkPqQWkd/SUztjGdRI5kcFzNFXsLcUInL15AY9eetUEKLJswnBg7q4QDE7Tyom2Y76i5cwFkD/aIXii3rZBIUcrLgax9Fpuq8s8CyT+W0vvK1EzflROEnkV9mNHWPSvSyBokatdOnulp6yXavanXyEBo0eIDocSK7cuon6TNJp+Sn0sPuzwd4DvZKJ9W1GHuWh15Z+/IAtLvdB/AAKwJVschMU8jf5dMav/u8QPKlhpwTG7OoW9YB7Nskb0i+5dPSZT90MrIJi5jPko+2UfBoyLR+ebXxEQ7EW8rPh9ull6AMuVh6PzIEHboohGnosr4tqvbLEvFzjPK2CchxE/RhVfN+fTAOcazjFSuGqL2EX9JwyafLR4OmZWjSM8O88xPghZfYueNrgSfWUcYJZ6UQHh21+UBuoj/ivv0R5ZiwopcQSMxn1qXYmcQpxlQIyuXirpR+h0MA96wE+4uOfj99NlRmVwK7NrTwWjG3doT3gjRoI8G6gns=) 2025-08-29 17:22:51.122988 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOWkjcen2R/Buu/KQmdUnTivou3nFv46dpNgNcTK6n3rJ+JJ3XZdFhfJAyLMiAOHhoM7coNxXnpxZ9z7Il0Fmi8=) 2025-08-29 17:22:51.122999 | orchestrator | 2025-08-29 17:22:51.123010 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-08-29 17:22:51.123022 | orchestrator | Friday 29 August 2025 17:22:41 +0000 (0:00:01.058) 
0:00:14.082 ********* 2025-08-29 17:22:51.123033 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 17:22:51.123045 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-08-29 17:22:51.123056 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 17:22:51.123067 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 17:22:51.123078 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 17:22:51.123112 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 17:22:51.123123 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 17:22:51.123134 | orchestrator | 2025-08-29 17:22:51.123145 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-08-29 17:22:51.123157 | orchestrator | Friday 29 August 2025 17:22:46 +0000 (0:00:05.192) 0:00:19.275 ********* 2025-08-29 17:22:51.123170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 17:22:51.123183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 17:22:51.123194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 17:22:51.123205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 17:22:51.123216 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 17:22:51.123227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 17:22:51.123238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 17:22:51.123249 | orchestrator | 2025-08-29 17:22:51.123260 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:51.123271 | orchestrator | Friday 29 August 2025 17:22:47 +0000 (0:00:00.163) 0:00:19.438 ********* 2025-08-29 17:22:51.123313 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqyVtKLj2o/nsmsrtMWoVzuu/X83Im2LgQdVEG+0m+3) 2025-08-29 17:22:51.123365 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnSE/PUykO/FIF6f1a/dCKDnCVGqheJnkdqoJEQvAg06iPGcXgIZzsFlCn2Ka490ifuPvau/UDuGw4ACyW0c6LMo49wFINvGggVLIOdoTgMuU5zMYn0PhkZe2wvlSo7U0mgFmcy9BRlL1OV8a+rltWTXgQedpzVyUPXJ26pUZ4200Sc7at9YBRs/7rwGYea5e2u2uJPyLJ1Eg+IWdOTxOvHlcVpZWvtLLeRS3CKCmzGti474WEI62ucfRGCO44VQJ0LoOkgrPePH+NEnE6HajgAx/7WmcjgrC4Ks6gpqVDNo39iNCN/yVTrRNsg1ltKcZ71/gY+3yrNZk4TW1wt22J7hHMEr4065lQVTmqJGjqEapqfm3slz/7Sc0IoNGKpvnN8PhpNAb0XUUpTyqHjI71PdE67n+z7/feX/uRv1D9H7zpJvsHr7ph4Uk2p7G4Yi59u0LoLjetz+Xg98L0VlVYozaEfLGgo148GlA5xSlbrRrx2Co1J3i6bbOVSu6WlVk=) 2025-08-29 17:22:51.123381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO9JHoBOaW6lZGprm6YWIIDmmxFZKU/CZMrM5+e0Cm+JQoL4KVJ7MbauF2gumYUvgzDP14a3s856tJinRSeC2Kg=) 2025-08-29 
17:22:51.123394 | orchestrator | 2025-08-29 17:22:51.123407 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:51.123420 | orchestrator | Friday 29 August 2025 17:22:48 +0000 (0:00:00.967) 0:00:20.405 ********* 2025-08-29 17:22:51.123432 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKD8qTC+p62vaHW0TQ1GFUji2TeQhCBntmvGIOp5dscIcIpR8OqYrx0L4UdhGUbJVboKqOfRSm4Bnnh0FvNf/SI=) 2025-08-29 17:22:51.123446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7CIxoF5U/ugOOv7sVvjsI6WjQBrKYEcCJLjzKPS2sTeclOTUQrPDOfG21jYzbk/79Az4OTblWwPeKSjJMyea0LM1eoRoSdl5c0L4QwoXjFWbvcbZfmPD2KXxd38oHEdvPyE5MXrJFuCAk13Fy5jhttiEPM1U/IgzvPEe1SLbhEnw1YtH5c1Vd8ZNgs2mMXCDwAO5WMZKKYiWEMTtNfSzW4kh/p/WmIo50A1VfldWs++IOI53vXyBUjTHjujLl9SfXSh9ihGtRWgRSINBRFoZCDkN2uJfEY+VKpZtW4AWJc0vb0msxJOt3xZjHKwtiUjYqcNhdkJSojc8SMgP0qaXMTKziSMuJ6vNQr1cJLeU+cFLOmqW5hnb979OJKcLWbX5AJWUMUQSqgtsdg25jw1ziuaTPPbpbaNn6yJhEZXZHsL0BVdCTZMLnQ2GUcMkPaLJOaRjUvVG8re9teSYG1SEbEaNiF2diogj2qeo6eDwZfrj/NnVK42k6zy10mpuCFGc=) 2025-08-29 17:22:51.123468 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu1+cBA2fWR8w9dHRjbpA11Ju5SpAHmh0DsHROpZObm) 2025-08-29 17:22:51.123480 | orchestrator | 2025-08-29 17:22:51.123493 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:51.123505 | orchestrator | Friday 29 August 2025 17:22:49 +0000 (0:00:01.005) 0:00:21.411 ********* 2025-08-29 17:22:51.123518 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGBpDewE8D4pD0woDZ+Ot7IFZHqXuWdzW+kLw5Q9I2Dqjo6CPVPkfpxOTQdzoeA4lam97IdehadeSUkXbNndDx4=) 2025-08-29 17:22:51.123531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiPUVX6m9AMA/GbVhomXQZLNDNGmWfrv5MQ0RrYQIlw3JT8D05KOc9Cn3K0xi1mFPHO6laj/wfqlZTVGX9yuBMY7rlg6tIzQ0x/nDMJHB2vSWbBs1Q8hmLYMyZN+sSq6EmjBPFTtaql1pmajru0wJdOoLN+4Pqyt8/yaDhJ9/sV8mOM3UYW5Y27FGzYSbbx+pZXuqms00fgvV7EL8MRNBTsus8BUqWaztDXZb9PUaBNnaYl2V3FiFsFhPEmSljywrXGoM+W18eTCO8+HWTaKBwbF6lVrjfMKSfhpQcMvxUXdKMWgkGNzCe71yvSaA6URDZArakanJ7TVTp7+cxmK/dm+FwCmC/An+2R4oeph4bqsgnkY4pPR7jM3wLu2zm/4X5UwSb42JyYuGZFlQbT+AkfbbSFyDAiU/W/OORCXoTG4wKyNrOEk3HX4grglK1DNLVIPMMH08J4ugWGkC7jIjwu2cakvskxtj93IQMKv9XP5CUoDoUbRSCBijjtcPHv4U=) 2025-08-29 17:22:51.123544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBxiAJD3i0n/cLKTGWcg+cwakxuX9KzafLsOZL4UnRtx) 2025-08-29 17:22:51.123556 | orchestrator | 2025-08-29 17:22:51.123568 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:51.123581 | orchestrator | Friday 29 August 2025 17:22:50 +0000 (0:00:01.017) 0:00:22.428 ********* 2025-08-29 17:22:51.123593 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICKyqao87OufA0X1tqrVaxWF8PnNbtBdKGn7g6Cc7WLX) 2025-08-29 17:22:51.123606 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHns8a0uUi/qHIg+xnhAWotEtI+ZS1PTjOokzUEHXsfjabSzfZ8QxDqENSZRIrwcE0YtNbgJg84i96BTuw10abmYLPNmt3tXRNaNw2xsRnkE47RhAppbY5YG6Ro0jgTuzh+DsNqiDNkYV1M4t2tOZDcGHi+4mlHDXePv0B0wRTRimvatHFyQuohyM5kEt8lBAAgi4WGTJCaZ2jmoq6/SgbrYAy/E6rmqBCxL+7RQhyjQYq7GyVf5Yx/v3DymP1IXeUsNXsy4c66sJvlKmtTj68lprzp8RDk3nQXf8y7oYPSQ918FDKMqVCTQeX+V/RMUsAqrm0pAZ8hMJPxG7Z/KpW4EtUo2o94rGK8U/MNXnyG9ZDqQHtOtrm3AQc2EEwe1o9D6AbyxIkQYM4DCm2ubNREXWcGPtPCqV7PqWkPif2V9mvajrPn41Qd5xgTUsGxZa2/DE+gP3NA87hlKMa1vjCgPulNhIXGmU+ViS4yvxtpigAtfiPuYbtwo+Uw4f3R2M=) 2025-08-29 17:22:51.123629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBYCnaa4AXwo8e4WdVUX8UQYeGl0USG5c1kJ50sOrGYO0JLCzkWsFrXoh6DMopWQrvohAhEQK77tEaRex4nEEEo=) 2025-08-29 17:22:54.997635 | orchestrator | 2025-08-29 17:22:54.997727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:54.997742 | orchestrator | Friday 29 August 2025 17:22:51 +0000 (0:00:01.017) 0:00:23.446 ********* 2025-08-29 17:22:54.997754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDlfVU4d8U0RX3jOAIZ6T3av1tWU/i7tVBTsuSG5cBX+/mwZEkpxdwn2GkCujtab6SNIUUTLs4MEagxHZalnf6M=) 2025-08-29 17:22:54.997770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMq+/VN2XkpZEgWeHbOESlV5mTrVhaaGHtZuQaHfQ0JFiryAvK1m/+hkloXJ5q3gLwHl6sgj0HvpHsFCSpET0GnvIYfktvR6ExliaJauX0PQjsIKRWCQwTY7d47Dr1YntrBj2T6j4EePnLm7o/4SELFTB+UfV+JOp6vJzRXPsWtSHfRbE8E2RAUmtuBinV7ha5BxFs0zdzTCVQpHkWC0M+hrgSSBf4at4z+SWP6ih89qsWTyKcD2ep05emdOYh7F+CQ8fOjY+Kj0dRcWKAXUdK3wXVSspPak3y9KIU56DBFvYsb3X3UXJiREVXxXaIEOK60bKMrJ+FMpWtj8znwX0I8+kJ1/G2TFoeSJH7+QtAgZ15FSjYR/9vGGofpsQwf8+/v+GkhyrYYrajfuNf+/PLwkyQTBV9UhX+IU61NIxZvXv6qQ0eMsDCuocsyZYjmiG6NJU3jQtumIlHd3NkSspSUdLeYreiqt1LmASJZ7FgOgg6wQhv81EkSQU+CB1d7bk=) 2025-08-29 17:22:54.997808 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFMCYkQzU8vrxan4UO4nINVf3j6uVmfc0K++nIkqDMr6) 2025-08-29 17:22:54.997821 | orchestrator | 2025-08-29 17:22:54.997833 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:54.997844 | orchestrator | Friday 29 August 2025 17:22:52 +0000 (0:00:01.011) 0:00:24.458 ********* 2025-08-29 17:22:54.997855 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDVwmpIQmzZJcES0M12D4PTw9mVhAQzlHJvI4cGO+boMa7ipKo6LBY3XN7s+NqH+v0GWLGh/05rzXEp7vQ+6j0jnl4LRl1f03osA07uWzOGAK1C3MTof2OE87Ce20iXCzwqvDqZ8aM8xt1DdzgTRcfYbrwXvdcd7rPYMCGouViFTNqByjK+ojAw1W3XCKJCBMLLdFr2NR+dInuGDdywxOrLUhg/PU4VU+41sW6XO/TGO0Coj24EAQYB7guypD6pkjfOp30bRm9WVipUpWlFmS/Rnk8GRhjfgZRVHlQWKMTBaZG3WRmsUcxq5rXJkErsBRK8+gHhL53W9OeA3G+VKs18XZL/lpaNkj8YCdL+KafofQrAvAjzuflBzan4avvBYXI31VyQILlvOQnaMcOeXyRAF9emC4QXS5tsjwp8TnPo3f7G45O+qaumCkDQng5GgBM0OVUFcLucce11rzZtStNqWDvS+npnx33YCCD05FD5x1m4vceZbtWzbHkaXIoNogU=) 2025-08-29 17:22:54.997883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHtGTHANpRmMr+0+eIEcp8U5wZ+SeBmqeopNNubHdOmye65wHvqaYiv2LzmeSfNi6CufNPt9+8OAfZv3KqShhhk=) 2025-08-29 17:22:54.997895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMAvkGPJ8OXxYOsmBg6veHnoanDNdZVhSX+1wwideEbp) 2025-08-29 17:22:54.997906 | orchestrator | 2025-08-29 17:22:54.997917 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 17:22:54.997929 | orchestrator | Friday 29 August 2025 17:22:53 +0000 (0:00:00.998) 0:00:25.457 ********* 2025-08-29 17:22:54.997940 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC86UrMEVAHl4UTkPqQWkd/SUztjGdRI5kcFzNFXsLcUInL15AY9eetUEKLJswnBg7q4QDE7Tyom2Y76i5cwFkD/aIXii3rZBIUcrLgax9Fpuq8s8CyT+W0vvK1EzflROEnkV9mNHWPSvSyBokatdOnulp6yXavanXyEBo0eIDocSK7cuon6TNJp+Sn0sPuzwd4DvZKJ9W1GHuWh15Z+/IAtLvdB/AAKwJVschMU8jf5dMav/u8QPKlhpwTG7OoW9YB7Nskb0i+5dPSZT90MrIJi5jPko+2UfBoyLR+ebXxEQ7EW8rPh9ull6AMuVh6PzIEHboohGnosr4tqvbLEvFzjPK2CchxE/RhVfN+fTAOcazjFSuGqL2EX9JwyafLR4OmZWjSM8O88xPghZfYueNrgSfWUcYJZ6UQHh21+UBuoj/ivv0R5ZiwopcQSMxn1qXYmcQpxlQIyuXirpR+h0MA96wE+4uOfj99NlRmVwK7NrTwWjG3doT3gjRoI8G6gns=) 2025-08-29 17:22:54.997951 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOWkjcen2R/Buu/KQmdUnTivou3nFv46dpNgNcTK6n3rJ+JJ3XZdFhfJAyLMiAOHhoM7coNxXnpxZ9z7Il0Fmi8=) 2025-08-29 17:22:54.997963 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAn3VWRpLTPdVP0zrtfpfDwhtDZY41TtLTcGuFKcsuQb) 2025-08-29 17:22:54.997974 | orchestrator | 2025-08-29 17:22:54.997985 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-08-29 17:22:54.997996 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:00.955) 0:00:26.412 ********* 2025-08-29 17:22:54.998008 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 17:22:54.998064 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 17:22:54.998076 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 17:22:54.998092 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 17:22:54.998103 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 17:22:54.998114 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 17:22:54.998125 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 17:22:54.998136 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:54.998147 | orchestrator | 2025-08-29 17:22:54.998173 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-08-29 17:22:54.998193 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:00.147) 0:00:26.560 ********* 2025-08-29 17:22:54.998204 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:54.998215 | orchestrator | 2025-08-29 17:22:54.998226 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-08-29 17:22:54.998237 | orchestrator | Friday 29 August 2025 
17:22:54 +0000 (0:00:00.055) 0:00:26.615 ********* 2025-08-29 17:22:54.998248 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:22:54.998258 | orchestrator | 2025-08-29 17:22:54.998269 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-08-29 17:22:54.998305 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:00.051) 0:00:26.667 ********* 2025-08-29 17:22:54.998317 | orchestrator | changed: [testbed-manager] 2025-08-29 17:22:54.998327 | orchestrator | 2025-08-29 17:22:54.998338 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:22:54.998349 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:22:54.998361 | orchestrator | 2025-08-29 17:22:54.998372 | orchestrator | 2025-08-29 17:22:54.998383 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:22:54.998393 | orchestrator | Friday 29 August 2025 17:22:54 +0000 (0:00:00.475) 0:00:27.142 ********* 2025-08-29 17:22:54.998404 | orchestrator | =============================================================================== 2025-08-29 17:22:54.998415 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.06s 2025-08-29 17:22:54.998425 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2025-08-29 17:22:54.998437 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2025-08-29 17:22:54.998447 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 17:22:54.998458 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-08-29 17:22:54.998469 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 
2025-08-29 17:22:54.998480 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-08-29 17:22:54.998491 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-08-29 17:22:54.998501 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-08-29 17:22:54.998512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 17:22:54.998523 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-08-29 17:22:54.998534 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-08-29 17:22:54.998545 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-08-29 17:22:54.998555 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-08-29 17:22:54.998566 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-08-29 17:22:54.998577 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-08-29 17:22:54.998588 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-08-29 17:22:54.998598 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.21s 2025-08-29 17:22:54.998610 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-08-29 17:22:54.998621 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-08-29 17:22:55.205443 | orchestrator | + osism apply squid 2025-08-29 17:23:06.937852 | orchestrator | 2025-08-29 17:23:06 | INFO  | Task dcc2b715-8635-41e3-b559-9f6115d7bcb5 (squid) was prepared for execution. 
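The `osism.commons.known_hosts` tasks above run `ssh-keyscan` against each node and write one known_hosts entry per key type, idempotently (re-runs report `changed` only when an entry is new). A minimal shell sketch of that append pattern, using one ed25519 key taken from this log; the temp file is a stand-in for the manager's real known_hosts file:

```shell
# Idempotent known_hosts append, as the "Write scanned known_hosts entries"
# tasks do per scanned key; the entry below is one ed25519 key from this log.
entry='testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFu1+cBA2fWR8w9dHRjbpA11Ju5SpAHmh0DsHROpZObm'
kh="$(mktemp)"                       # stand-in for the real known_hosts file

grep -qxF "$entry" "$kh" || printf '%s\n' "$entry" >>"$kh"   # first run: appends
grep -qxF "$entry" "$kh" || printf '%s\n' "$entry" >>"$kh"   # second run: no-op

chmod 600 "$kh"                      # the role also tightens file permissions
wc -l <"$kh"                         # still exactly one entry
```

Running the append twice leaves a single line in the file, which is why the role can safely re-scan on every deploy.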
2025-08-29 17:23:06.937953 | orchestrator | 2025-08-29 17:23:06 | INFO  | It takes a moment until task dcc2b715-8635-41e3-b559-9f6115d7bcb5 (squid) has been started and output is visible here. 2025-08-29 17:25:02.227827 | orchestrator | 2025-08-29 17:25:02.227937 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-08-29 17:25:02.227954 | orchestrator | 2025-08-29 17:25:02.227967 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-08-29 17:25:02.227979 | orchestrator | Friday 29 August 2025 17:23:11 +0000 (0:00:00.171) 0:00:00.171 ********* 2025-08-29 17:25:02.227991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 17:25:02.228003 | orchestrator | 2025-08-29 17:25:02.228014 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-08-29 17:25:02.228025 | orchestrator | Friday 29 August 2025 17:23:11 +0000 (0:00:00.125) 0:00:00.296 ********* 2025-08-29 17:25:02.228037 | orchestrator | ok: [testbed-manager] 2025-08-29 17:25:02.228049 | orchestrator | 2025-08-29 17:25:02.228060 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-08-29 17:25:02.228071 | orchestrator | Friday 29 August 2025 17:23:12 +0000 (0:00:01.631) 0:00:01.928 ********* 2025-08-29 17:25:02.228083 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-08-29 17:25:02.228094 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-08-29 17:25:02.228105 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-08-29 17:25:02.228116 | orchestrator | 2025-08-29 17:25:02.228127 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-08-29 17:25:02.228138 | orchestrator | Friday 29 
August 2025 17:23:13 +0000 (0:00:01.179) 0:00:03.108 ********* 2025-08-29 17:25:02.228149 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-08-29 17:25:02.228161 | orchestrator | 2025-08-29 17:25:02.228171 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-08-29 17:25:02.228183 | orchestrator | Friday 29 August 2025 17:23:15 +0000 (0:00:01.091) 0:00:04.199 ********* 2025-08-29 17:25:02.228194 | orchestrator | ok: [testbed-manager] 2025-08-29 17:25:02.228205 | orchestrator | 2025-08-29 17:25:02.228216 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-08-29 17:25:02.228238 | orchestrator | Friday 29 August 2025 17:23:15 +0000 (0:00:00.379) 0:00:04.578 ********* 2025-08-29 17:25:02.228249 | orchestrator | changed: [testbed-manager] 2025-08-29 17:25:02.228261 | orchestrator | 2025-08-29 17:25:02.228272 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-08-29 17:25:02.228283 | orchestrator | Friday 29 August 2025 17:23:16 +0000 (0:00:00.951) 0:00:05.529 ********* 2025-08-29 17:25:02.228337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-08-29 17:25:02.228349 | orchestrator | ok: [testbed-manager] 2025-08-29 17:25:02.228361 | orchestrator | 2025-08-29 17:25:02.228373 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-08-29 17:25:02.228385 | orchestrator | Friday 29 August 2025 17:23:48 +0000 (0:00:32.494) 0:00:38.024 ********* 2025-08-29 17:25:02.228398 | orchestrator | changed: [testbed-manager] 2025-08-29 17:25:02.228411 | orchestrator | 2025-08-29 17:25:02.228423 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-08-29 17:25:02.228436 | orchestrator | Friday 29 August 2025 17:24:01 +0000 (0:00:12.335) 0:00:50.359 ********* 2025-08-29 17:25:02.228448 | orchestrator | Pausing for 60 seconds 2025-08-29 17:25:02.228461 | orchestrator | changed: [testbed-manager] 2025-08-29 17:25:02.228474 | orchestrator | 2025-08-29 17:25:02.228486 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-08-29 17:25:02.228500 | orchestrator | Friday 29 August 2025 17:25:01 +0000 (0:01:00.085) 0:01:50.445 ********* 2025-08-29 17:25:02.228512 | orchestrator | ok: [testbed-manager] 2025-08-29 17:25:02.228524 | orchestrator | 2025-08-29 17:25:02.228537 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-08-29 17:25:02.228574 | orchestrator | Friday 29 August 2025 17:25:01 +0000 (0:00:00.057) 0:01:50.502 ********* 2025-08-29 17:25:02.228593 | orchestrator | changed: [testbed-manager] 2025-08-29 17:25:02.228611 | orchestrator | 2025-08-29 17:25:02.228627 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:25:02.228644 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:25:02.228660 | orchestrator | 2025-08-29 17:25:02.228677 | orchestrator | 2025-08-29 17:25:02.228694 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-08-29 17:25:02.228712 | orchestrator | Friday 29 August 2025 17:25:01 +0000 (0:00:00.608) 0:01:51.110 ********* 2025-08-29 17:25:02.228731 | orchestrator | =============================================================================== 2025-08-29 17:25:02.228750 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-08-29 17:25:02.228769 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.49s 2025-08-29 17:25:02.228784 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.34s 2025-08-29 17:25:02.228795 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.63s 2025-08-29 17:25:02.228806 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2025-08-29 17:25:02.228816 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2025-08-29 17:25:02.228827 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-08-29 17:25:02.228839 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-08-29 17:25:02.228850 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-08-29 17:25:02.228861 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.13s 2025-08-29 17:25:02.228872 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-08-29 17:25:02.517808 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-08-29 17:25:02.517900 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-08-29 17:25:02.520958 | orchestrator | ++ semver 9.2.0 9.0.0 
2025-08-29 17:25:02.579244 | orchestrator | + [[ 1 -lt 0 ]] 2025-08-29 17:25:02.579905 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-08-29 17:25:14.577578 | orchestrator | 2025-08-29 17:25:14 | INFO  | Task 58567415-0d2b-4058-9a71-06f87ca429f7 (operator) was prepared for execution. 2025-08-29 17:25:14.577679 | orchestrator | 2025-08-29 17:25:14 | INFO  | It takes a moment until task 58567415-0d2b-4058-9a71-06f87ca429f7 (operator) has been started and output is visible here. 2025-08-29 17:25:32.125184 | orchestrator | 2025-08-29 17:25:32.125289 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-08-29 17:25:32.125335 | orchestrator | 2025-08-29 17:25:32.125349 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 17:25:32.125360 | orchestrator | Friday 29 August 2025 17:25:18 +0000 (0:00:00.158) 0:00:00.158 ********* 2025-08-29 17:25:32.125372 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:32.125384 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:32.125395 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:25:32.125406 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:25:32.125417 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:25:32.125428 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:32.125439 | orchestrator | 2025-08-29 17:25:32.125450 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-08-29 17:25:32.125461 | orchestrator | Friday 29 August 2025 17:25:22 +0000 (0:00:03.798) 0:00:03.957 ********* 2025-08-29 17:25:32.125472 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:32.125483 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:32.125494 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:32.125505 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:25:32.125534 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:25:32.125545 | 
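The shell trace above (`semver 9.2.0 9.0.0` followed by `+ [[ 1 -lt 0 ]]`) gates the `docker_namespace` rewrite on a version comparison: the helper returned `1` (greater), so the `-lt 0` branch was skipped. A minimal sketch of the same comparison using `sort -V` in place of the `semver` helper (an assumption, since the helper's implementation is not shown in the log):

```shell
# version_ge: succeeds when $1 >= $2 under version sort (GNU/BSD sort -V)
version_ge() { [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]; }

if version_ge 9.2.0 9.0.0; then
  echo "9.2.0 >= 9.0.0: keep the kolla/release namespace rewrite"
fi
```

`sort -V` orders dotted version strings numerically per component, so `9.2.0` sorts after `9.0.0` and the guard holds.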
orchestrator | ok: [testbed-node-3] 2025-08-29 17:25:32.125556 | orchestrator | 2025-08-29 17:25:32.125566 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-08-29 17:25:32.125577 | orchestrator | 2025-08-29 17:25:32.125588 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 17:25:32.125599 | orchestrator | Friday 29 August 2025 17:25:23 +0000 (0:00:00.920) 0:00:04.878 ********* 2025-08-29 17:25:32.125609 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:32.125620 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:32.125631 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:32.125641 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:25:32.125652 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:25:32.125662 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:25:32.125673 | orchestrator | 2025-08-29 17:25:32.125684 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 17:25:32.125695 | orchestrator | Friday 29 August 2025 17:25:23 +0000 (0:00:00.167) 0:00:05.045 ********* 2025-08-29 17:25:32.125705 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:25:32.125716 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:25:32.125728 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:25:32.125740 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:25:32.125752 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:25:32.125764 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:25:32.125775 | orchestrator | 2025-08-29 17:25:32.125788 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 17:25:32.125801 | orchestrator | Friday 29 August 2025 17:25:23 +0000 (0:00:00.158) 0:00:05.204 ********* 2025-08-29 17:25:32.125814 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:25:32.125827 | orchestrator | changed: [testbed-node-3] 2025-08-29 
17:25:32.125840 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:25:32.125852 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:32.125864 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:32.125876 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:32.125889 | orchestrator | 2025-08-29 17:25:32.125902 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 17:25:32.125915 | orchestrator | Friday 29 August 2025 17:25:24 +0000 (0:00:00.653) 0:00:05.858 ********* 2025-08-29 17:25:32.125927 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:25:32.125940 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:32.125952 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:32.125963 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:25:32.125976 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:32.125988 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:25:32.126001 | orchestrator | 2025-08-29 17:25:32.126087 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 17:25:32.126102 | orchestrator | Friday 29 August 2025 17:25:25 +0000 (0:00:00.819) 0:00:06.677 ********* 2025-08-29 17:25:32.126114 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 17:25:32.126125 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 17:25:32.126136 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 17:25:32.126182 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 17:25:32.126193 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 17:25:32.126204 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 17:25:32.126215 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 17:25:32.126225 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-08-29 17:25:32.126236 | orchestrator | changed: 
[testbed-node-1] => (item=sudo) 2025-08-29 17:25:32.126247 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-08-29 17:25:32.126257 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-08-29 17:25:32.126268 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 17:25:32.126279 | orchestrator | 2025-08-29 17:25:32.126290 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 17:25:32.126325 | orchestrator | Friday 29 August 2025 17:25:26 +0000 (0:00:01.274) 0:00:07.952 ********* 2025-08-29 17:25:32.126337 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:32.126348 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:32.126363 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:25:32.126374 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:25:32.126385 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:25:32.126396 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:32.126407 | orchestrator | 2025-08-29 17:25:32.126418 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 17:25:32.126429 | orchestrator | Friday 29 August 2025 17:25:28 +0000 (0:00:02.295) 0:00:10.248 ********* 2025-08-29 17:25:32.126440 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 17:25:32.126451 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-08-29 17:25:32.126462 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-08-29 17:25:32.126473 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:25:32.126501 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:25:32.126513 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:25:32.126524 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:25:32.126545 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:25:32.126560 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 17:25:32.126571 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-08-29 17:25:32.126582 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-08-29 17:25:32.126593 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-08-29 17:25:32.126603 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-08-29 17:25:32.126614 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-08-29 17:25:32.126625 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-08-29 17:25:32.126635 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:25:32.126646 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:25:32.126657 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:25:32.126668 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:25:32.126678 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:25:32.126689 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-08-29 17:25:32.126700 | 
orchestrator | 2025-08-29 17:25:32.126711 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 17:25:32.126723 | orchestrator | Friday 29 August 2025 17:25:29 +0000 (0:00:01.197) 0:00:11.445 ********* 2025-08-29 17:25:32.126733 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:32.126744 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:32.126755 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:32.126765 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:25:32.126776 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:25:32.126786 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:25:32.126797 | orchestrator | 2025-08-29 17:25:32.126808 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 17:25:32.126819 | orchestrator | Friday 29 August 2025 17:25:30 +0000 (0:00:00.161) 0:00:11.607 ********* 2025-08-29 17:25:32.126830 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:32.126840 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:25:32.126851 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:32.126862 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:32.126879 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:25:32.126890 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:25:32.126901 | orchestrator | 2025-08-29 17:25:32.126912 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 17:25:32.126923 | orchestrator | Friday 29 August 2025 17:25:30 +0000 (0:00:00.649) 0:00:12.257 ********* 2025-08-29 17:25:32.126934 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:32.126945 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:32.126956 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:32.126966 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
17:25:32.126977 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:25:32.126988 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:25:32.126999 | orchestrator | 2025-08-29 17:25:32.127010 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 17:25:32.127021 | orchestrator | Friday 29 August 2025 17:25:30 +0000 (0:00:00.218) 0:00:12.475 ********* 2025-08-29 17:25:32.127031 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:25:32.127042 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:25:32.127053 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 17:25:32.127063 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:32.127074 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 17:25:32.127085 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:25:32.127095 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:25:32.127106 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:25:32.127117 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-08-29 17:25:32.127127 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:32.127138 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-08-29 17:25:32.127149 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:32.127159 | orchestrator | 2025-08-29 17:25:32.127170 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 17:25:32.127181 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:00.726) 0:00:13.202 ********* 2025-08-29 17:25:32.127192 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:32.127203 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:32.127213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:32.127224 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:25:32.127235 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
17:25:32.127245 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:25:32.127257 | orchestrator | 2025-08-29 17:25:32.127267 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 17:25:32.127278 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:00.152) 0:00:13.354 ********* 2025-08-29 17:25:32.127289 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:32.127337 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:32.127349 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:32.127359 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:25:32.127370 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:25:32.127381 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:25:32.127391 | orchestrator | 2025-08-29 17:25:32.127402 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 17:25:32.127413 | orchestrator | Friday 29 August 2025 17:25:31 +0000 (0:00:00.164) 0:00:13.519 ********* 2025-08-29 17:25:32.127424 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:32.127435 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:32.127446 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:32.127457 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:25:32.127474 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:25:33.318279 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:25:33.318396 | orchestrator | 2025-08-29 17:25:33.318411 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 17:25:33.318424 | orchestrator | Friday 29 August 2025 17:25:32 +0000 (0:00:00.157) 0:00:13.676 ********* 2025-08-29 17:25:33.318464 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:25:33.318492 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:25:33.318503 | orchestrator | changed: [testbed-node-3] 2025-08-29 
17:25:33.318514 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:25:33.318525 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:25:33.318536 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:25:33.318546 | orchestrator | 2025-08-29 17:25:33.318557 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 17:25:33.318568 | orchestrator | Friday 29 August 2025 17:25:32 +0000 (0:00:00.630) 0:00:14.307 ********* 2025-08-29 17:25:33.318579 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:25:33.318590 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:25:33.318601 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:25:33.318612 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:25:33.318622 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:25:33.318633 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:25:33.318643 | orchestrator | 2025-08-29 17:25:33.318654 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:25:33.318667 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:25:33.318679 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:25:33.318690 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:25:33.318701 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:25:33.318712 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:25:33.318722 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:25:33.318733 | orchestrator | 2025-08-29 17:25:33.318744 | orchestrator | 2025-08-29 17:25:33.318755 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:25:33.318766 | orchestrator | Friday 29 August 2025 17:25:33 +0000 (0:00:00.301) 0:00:14.608 ********* 2025-08-29 17:25:33.318776 | orchestrator | =============================================================================== 2025-08-29 17:25:33.318787 | orchestrator | Gathering Facts --------------------------------------------------------- 3.80s 2025-08-29 17:25:33.318798 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 2.30s 2025-08-29 17:25:33.318808 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.27s 2025-08-29 17:25:33.318820 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s 2025-08-29 17:25:33.318832 | orchestrator | Do not require tty for all users ---------------------------------------- 0.92s 2025-08-29 17:25:33.318844 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s 2025-08-29 17:25:33.318856 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2025-08-29 17:25:33.318868 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2025-08-29 17:25:33.318881 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.65s 2025-08-29 17:25:33.318893 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s 2025-08-29 17:25:33.318906 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s 2025-08-29 17:25:33.318918 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2025-08-29 17:25:33.318939 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2025-08-29 17:25:33.318952 | orchestrator 
| osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-08-29 17:25:33.318963 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2025-08-29 17:25:33.318976 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-08-29 17:25:33.318989 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-08-29 17:25:33.319001 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-08-29 17:25:33.612929 | orchestrator | + osism apply --environment custom facts 2025-08-29 17:25:35.438532 | orchestrator | 2025-08-29 17:25:35 | INFO  | Trying to run play facts in environment custom 2025-08-29 17:25:45.556044 | orchestrator | 2025-08-29 17:25:45 | INFO  | Task 01f8881d-45c0-4b02-8f28-ce2472025a86 (facts) was prepared for execution. 2025-08-29 17:25:45.556128 | orchestrator | 2025-08-29 17:25:45 | INFO  | It takes a moment until task 01f8881d-45c0-4b02-8f28-ce2472025a86 (facts) has been started and output is visible here. 
2025-08-29 17:26:29.603047 | orchestrator |
2025-08-29 17:26:29.603175 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-08-29 17:26:29.603192 | orchestrator |
2025-08-29 17:26:29.603205 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 17:26:29.603216 | orchestrator | Friday 29 August 2025 17:25:49 +0000 (0:00:00.093) 0:00:00.093 *********
2025-08-29 17:26:29.603228 | orchestrator | ok: [testbed-manager]
2025-08-29 17:26:29.603258 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:26:29.603271 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:26:29.603282 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:26:29.603293 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:26:29.603304 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:26:29.603335 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:26:29.603346 | orchestrator |
2025-08-29 17:26:29.603358 | orchestrator | TASK [Copy fact file] **********************************************************
2025-08-29 17:26:29.603370 | orchestrator | Friday 29 August 2025 17:25:51 +0000 (0:00:01.470) 0:00:01.563 *********
2025-08-29 17:26:29.603381 | orchestrator | ok: [testbed-manager]
2025-08-29 17:26:29.603392 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:26:29.603403 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:26:29.603414 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:26:29.603425 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:26:29.603436 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:26:29.603447 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:26:29.603458 | orchestrator |
2025-08-29 17:26:29.603469 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-08-29 17:26:29.603480 | orchestrator |
2025-08-29 17:26:29.603491 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-08-29 17:26:29.603502 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:01.224) 0:00:02.788 *********
2025-08-29 17:26:29.603513 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.603524 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.603535 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.603546 | orchestrator |
2025-08-29 17:26:29.603557 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-08-29 17:26:29.603569 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.127) 0:00:02.916 *********
2025-08-29 17:26:29.603580 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.603591 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.603604 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.603616 | orchestrator |
2025-08-29 17:26:29.603628 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-08-29 17:26:29.603640 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.212) 0:00:03.129 *********
2025-08-29 17:26:29.603653 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.603690 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.603703 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.603714 | orchestrator |
2025-08-29 17:26:29.603727 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-08-29 17:26:29.603739 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.135) 0:00:03.331 *********
2025-08-29 17:26:29.603752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:26:29.603766 | orchestrator |
2025-08-29 17:26:29.603778 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-08-29 17:26:29.603790 | orchestrator | Friday 29 August 2025 17:25:52 +0000 (0:00:00.135) 0:00:03.467 *********
2025-08-29 17:26:29.603803 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.603815 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.603827 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.603839 | orchestrator |
2025-08-29 17:26:29.603853 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-08-29 17:26:29.603865 | orchestrator | Friday 29 August 2025 17:25:53 +0000 (0:00:00.501) 0:00:03.969 *********
2025-08-29 17:26:29.603877 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:26:29.603889 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:26:29.603901 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:26:29.603913 | orchestrator |
2025-08-29 17:26:29.603926 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-08-29 17:26:29.603938 | orchestrator | Friday 29 August 2025 17:25:53 +0000 (0:00:00.146) 0:00:04.115 *********
2025-08-29 17:26:29.603951 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:26:29.603962 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:26:29.603973 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:26:29.603984 | orchestrator |
2025-08-29 17:26:29.603995 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-08-29 17:26:29.604006 | orchestrator | Friday 29 August 2025 17:25:54 +0000 (0:00:01.050) 0:00:05.166 *********
2025-08-29 17:26:29.604017 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.604027 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.604038 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.604049 | orchestrator |
2025-08-29 17:26:29.604060 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-08-29 17:26:29.604070 | orchestrator | Friday 29 August 2025 17:25:55 +0000 (0:00:00.478) 0:00:05.645 *********
2025-08-29 17:26:29.604081 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:26:29.604092 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:26:29.604103 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:26:29.604114 | orchestrator |
2025-08-29 17:26:29.604125 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-08-29 17:26:29.604135 | orchestrator | Friday 29 August 2025 17:25:56 +0000 (0:00:00.963) 0:00:06.608 *********
2025-08-29 17:26:29.604146 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:26:29.604157 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:26:29.604167 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:26:29.604178 | orchestrator |
2025-08-29 17:26:29.604189 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-08-29 17:26:29.604200 | orchestrator | Friday 29 August 2025 17:26:12 +0000 (0:00:16.729) 0:00:23.338 *********
2025-08-29 17:26:29.604210 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:26:29.604221 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:26:29.604232 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:26:29.604243 | orchestrator |
2025-08-29 17:26:29.604254 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-08-29 17:26:29.604281 | orchestrator | Friday 29 August 2025 17:26:12 +0000 (0:00:00.106) 0:00:23.444 *********
2025-08-29 17:26:29.604293 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:26:29.604304 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:26:29.604363 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:26:29.604374 | orchestrator |
2025-08-29 17:26:29.604385 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 17:26:29.604403 | orchestrator | Friday 29 August 2025 17:26:20 +0000 (0:00:07.143) 0:00:30.588 *********
2025-08-29 17:26:29.604414 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.604425 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.604436 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.604447 | orchestrator |
2025-08-29 17:26:29.604458 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 17:26:29.604468 | orchestrator | Friday 29 August 2025 17:26:20 +0000 (0:00:00.406) 0:00:30.995 *********
2025-08-29 17:26:29.604479 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-08-29 17:26:29.604490 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-08-29 17:26:29.604501 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-08-29 17:26:29.604512 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-08-29 17:26:29.604522 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-08-29 17:26:29.604533 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-08-29 17:26:29.604544 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-08-29 17:26:29.604554 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-08-29 17:26:29.604565 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-08-29 17:26:29.604576 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-08-29 17:26:29.604587 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-08-29 17:26:29.604597 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-08-29 17:26:29.604608 | orchestrator |
2025-08-29 17:26:29.604619 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 17:26:29.604629 | orchestrator | Friday 29 August 2025 17:26:23 +0000 (0:00:03.276) 0:00:34.272 *********
2025-08-29 17:26:29.604640 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.604651 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.604662 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.604672 | orchestrator |
2025-08-29 17:26:29.604683 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 17:26:29.604694 | orchestrator |
2025-08-29 17:26:29.604705 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 17:26:29.604716 | orchestrator | Friday 29 August 2025 17:26:24 +0000 (0:00:01.097) 0:00:35.369 *********
2025-08-29 17:26:29.604726 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:26:29.604737 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:26:29.604748 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:26:29.604758 | orchestrator | ok: [testbed-manager]
2025-08-29 17:26:29.604769 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:26:29.604779 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:26:29.604790 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:26:29.604801 | orchestrator |
2025-08-29 17:26:29.604811 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:26:29.604823 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:26:29.604834 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:26:29.604847 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:26:29.604858 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:26:29.604869 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:26:29.604886 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:26:29.604897 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:26:29.604908 | orchestrator |
2025-08-29 17:26:29.604919 | orchestrator |
2025-08-29 17:26:29.604930 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:26:29.604940 | orchestrator | Friday 29 August 2025 17:26:29 +0000 (0:00:04.767) 0:00:40.137 *********
2025-08-29 17:26:29.604951 | orchestrator | ===============================================================================
2025-08-29 17:26:29.604962 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.73s
2025-08-29 17:26:29.604973 | orchestrator | Install required packages (Debian) -------------------------------------- 7.14s
2025-08-29 17:26:29.604983 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s
2025-08-29 17:26:29.604994 | orchestrator | Copy fact files --------------------------------------------------------- 3.28s
2025-08-29 17:26:29.605005 | orchestrator | Create custom facts directory ------------------------------------------- 1.47s
2025-08-29 17:26:29.605016 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2025-08-29 17:26:29.605033 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.10s
2025-08-29 17:26:29.833129 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-08-29 17:26:29.833232 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.96s
2025-08-29 17:26:29.833246 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.50s
2025-08-29 17:26:29.833258 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-08-29 17:26:29.833269 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s
2025-08-29 17:26:29.833280 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-08-29 17:26:29.833290 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-08-29 17:26:29.833301 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2025-08-29 17:26:29.833370 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-08-29 17:26:29.833383 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2025-08-29 17:26:29.833394 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-08-29 17:26:30.120434 | orchestrator | + osism apply bootstrap
2025-08-29 17:26:41.921433 | orchestrator | 2025-08-29 17:26:41 | INFO  | Task 11c20d64-2e0d-4b5b-8b08-5164dfbcc868 (bootstrap) was prepared for execution.
2025-08-29 17:26:41.921541 | orchestrator | 2025-08-29 17:26:41 | INFO  | It takes a moment until task 11c20d64-2e0d-4b5b-8b08-5164dfbcc868 (bootstrap) has been started and output is visible here.
2025-08-29 17:27:00.366437 | orchestrator | 2025-08-29 17:27:00.366563 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-08-29 17:27:00.366580 | orchestrator | 2025-08-29 17:27:00.366610 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-08-29 17:27:00.366623 | orchestrator | Friday 29 August 2025 17:26:46 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-08-29 17:27:00.366634 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:00.366646 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:00.366657 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:00.366668 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:00.366678 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:00.366689 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:00.366700 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:00.366733 | orchestrator | 2025-08-29 17:27:00.366746 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 17:27:00.366757 | orchestrator | 2025-08-29 17:27:00.366768 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 17:27:00.366779 | orchestrator | Friday 29 August 2025 17:26:46 +0000 (0:00:00.256) 0:00:00.429 ********* 2025-08-29 17:27:00.366790 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:00.366801 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:00.366811 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:00.366822 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:00.366833 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:00.366844 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:00.366855 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:00.366866 | orchestrator | 2025-08-29 17:27:00.366877 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-08-29 17:27:00.366887 | orchestrator | 2025-08-29 17:27:00.366898 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 17:27:00.366909 | orchestrator | Friday 29 August 2025 17:26:51 +0000 (0:00:04.848) 0:00:05.277 ********* 2025-08-29 17:27:00.366920 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 17:27:00.366932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-08-29 17:27:00.366942 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 17:27:00.366953 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-08-29 17:27:00.366964 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 17:27:00.366975 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 17:27:00.366985 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 17:27:00.366996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 17:27:00.367007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 17:27:00.367017 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 17:27:00.367028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 17:27:00.367039 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 17:27:00.367050 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-08-29 17:27:00.367061 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 17:27:00.367071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-08-29 17:27:00.367082 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 17:27:00.367093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-08-29 17:27:00.367103 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-0)  2025-08-29 17:27:00.367114 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-08-29 17:27:00.367125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-08-29 17:27:00.367136 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 17:27:00.367147 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:00.367157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-08-29 17:27:00.367168 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 17:27:00.367179 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:00.367190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-08-29 17:27:00.367200 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 17:27:00.367211 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-08-29 17:27:00.367222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 17:27:00.367232 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-08-29 17:27:00.367248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-08-29 17:27:00.367268 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:00.367279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-08-29 17:27:00.367290 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 17:27:00.367301 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-08-29 17:27:00.367331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 17:27:00.367343 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 17:27:00.367354 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-08-29 17:27:00.367365 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
17:27:00.367376 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 17:27:00.367387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 17:27:00.367397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 17:27:00.367408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 17:27:00.367418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 17:27:00.367429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:27:00.367440 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-08-29 17:27:00.367467 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-08-29 17:27:00.367479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-08-29 17:27:00.367489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-08-29 17:27:00.367500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:27:00.367510 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-08-29 17:27:00.367521 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-08-29 17:27:00.367532 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:00.367542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:27:00.367553 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:00.367564 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:00.367574 | orchestrator | 2025-08-29 17:27:00.367585 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-08-29 17:27:00.367596 | orchestrator | 2025-08-29 17:27:00.367607 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-08-29 17:27:00.367618 | orchestrator | Friday 29 August 2025 17:26:51 +0000 (0:00:00.436) 
0:00:05.714 ********* 2025-08-29 17:27:00.367629 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:00.367640 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:00.367650 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:00.367661 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:00.367672 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:00.367682 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:00.367693 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:00.367704 | orchestrator | 2025-08-29 17:27:00.367714 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-08-29 17:27:00.367725 | orchestrator | Friday 29 August 2025 17:26:53 +0000 (0:00:02.158) 0:00:07.873 ********* 2025-08-29 17:27:00.367736 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:00.367747 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:00.367758 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:00.367768 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:00.367779 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:00.367789 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:00.367800 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:00.367811 | orchestrator | 2025-08-29 17:27:00.367822 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-08-29 17:27:00.367833 | orchestrator | Friday 29 August 2025 17:26:55 +0000 (0:00:01.289) 0:00:09.162 ********* 2025-08-29 17:27:00.367844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:00.367864 | orchestrator | 2025-08-29 17:27:00.367875 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-08-29 17:27:00.367887 | orchestrator | Friday 29 
August 2025 17:26:55 +0000 (0:00:00.306) 0:00:09.468 ********* 2025-08-29 17:27:00.367897 | orchestrator | changed: [testbed-manager] 2025-08-29 17:27:00.367908 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:00.367919 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:00.367930 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:00.367941 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:00.367951 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:00.367962 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:00.367972 | orchestrator | 2025-08-29 17:27:00.367983 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-08-29 17:27:00.367994 | orchestrator | Friday 29 August 2025 17:26:57 +0000 (0:00:02.291) 0:00:11.760 ********* 2025-08-29 17:27:00.368005 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:00.368017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:00.368030 | orchestrator | 2025-08-29 17:27:00.368041 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-08-29 17:27:00.368052 | orchestrator | Friday 29 August 2025 17:26:58 +0000 (0:00:00.332) 0:00:12.092 ********* 2025-08-29 17:27:00.368062 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:00.368073 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:00.368084 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:00.368095 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:00.368105 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:00.368116 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:00.368127 | orchestrator | 2025-08-29 17:27:00.368138 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-08-29 17:27:00.368149 | orchestrator | Friday 29 August 2025 17:26:59 +0000 (0:00:01.121) 0:00:13.214 ********* 2025-08-29 17:27:00.368160 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:00.368170 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:00.368181 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:00.368192 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:00.368203 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:00.368213 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:00.368224 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:00.368235 | orchestrator | 2025-08-29 17:27:00.368246 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-08-29 17:27:00.368256 | orchestrator | Friday 29 August 2025 17:26:59 +0000 (0:00:00.600) 0:00:13.815 ********* 2025-08-29 17:27:00.368267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:00.368278 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:00.368289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:27:00.368300 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:00.368331 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:00.368343 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:00.368353 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:00.368364 | orchestrator | 2025-08-29 17:27:00.368375 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 17:27:00.368387 | orchestrator | Friday 29 August 2025 17:27:00 +0000 (0:00:00.468) 0:00:14.283 ********* 2025-08-29 17:27:00.368398 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:00.368409 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:00.368426 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:13.193116 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 17:27:13.193220 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:13.193236 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:13.193270 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:13.193282 | orchestrator | 2025-08-29 17:27:13.193295 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 17:27:13.193307 | orchestrator | Friday 29 August 2025 17:27:00 +0000 (0:00:00.230) 0:00:14.513 ********* 2025-08-29 17:27:13.193347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:13.193378 | orchestrator | 2025-08-29 17:27:13.193390 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 17:27:13.193402 | orchestrator | Friday 29 August 2025 17:27:00 +0000 (0:00:00.287) 0:00:14.801 ********* 2025-08-29 17:27:13.193413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:13.193424 | orchestrator | 2025-08-29 17:27:13.193435 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 17:27:13.193445 | orchestrator | Friday 29 August 2025 17:27:01 +0000 (0:00:00.319) 0:00:15.120 ********* 2025-08-29 17:27:13.193456 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.193467 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.193478 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.193488 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.193499 | orchestrator | ok: [testbed-node-3] 2025-08-29 
17:27:13.193510 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.193521 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.193532 | orchestrator | 2025-08-29 17:27:13.193543 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 17:27:13.193554 | orchestrator | Friday 29 August 2025 17:27:02 +0000 (0:00:01.566) 0:00:16.686 ********* 2025-08-29 17:27:13.193564 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:13.193575 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:13.193586 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:13.193596 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:27:13.193607 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:13.193617 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:13.193628 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:13.193640 | orchestrator | 2025-08-29 17:27:13.193652 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 17:27:13.193664 | orchestrator | Friday 29 August 2025 17:27:02 +0000 (0:00:00.217) 0:00:16.904 ********* 2025-08-29 17:27:13.193677 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.193689 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.193700 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.193712 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.193723 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.193735 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.193746 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.193758 | orchestrator | 2025-08-29 17:27:13.193771 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 17:27:13.193783 | orchestrator | Friday 29 August 2025 17:27:03 +0000 (0:00:00.554) 0:00:17.459 ********* 2025-08-29 17:27:13.193840 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 17:27:13.193853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:13.193866 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:13.193878 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:27:13.193890 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:13.193902 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:13.193914 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:13.193926 | orchestrator | 2025-08-29 17:27:13.193938 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 17:27:13.193960 | orchestrator | Friday 29 August 2025 17:27:03 +0000 (0:00:00.244) 0:00:17.703 ********* 2025-08-29 17:27:13.193971 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.193983 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:13.193993 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:13.194004 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:13.194068 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:13.194081 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:13.194097 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:13.194108 | orchestrator | 2025-08-29 17:27:13.194119 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 17:27:13.194130 | orchestrator | Friday 29 August 2025 17:27:04 +0000 (0:00:00.589) 0:00:18.293 ********* 2025-08-29 17:27:13.194140 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.194151 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:13.194197 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:13.194208 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:13.194219 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:13.194230 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:13.194240 | orchestrator | changed: 
[testbed-node-5] 2025-08-29 17:27:13.194251 | orchestrator | 2025-08-29 17:27:13.194262 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 17:27:13.194273 | orchestrator | Friday 29 August 2025 17:27:05 +0000 (0:00:01.168) 0:00:19.461 ********* 2025-08-29 17:27:13.194283 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.194294 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.194305 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.194334 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.194346 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.194357 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.194367 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.194378 | orchestrator | 2025-08-29 17:27:13.194389 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 17:27:13.194400 | orchestrator | Friday 29 August 2025 17:27:06 +0000 (0:00:01.184) 0:00:20.646 ********* 2025-08-29 17:27:13.194431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:13.194443 | orchestrator | 2025-08-29 17:27:13.194454 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 17:27:13.194465 | orchestrator | Friday 29 August 2025 17:27:06 +0000 (0:00:00.342) 0:00:20.989 ********* 2025-08-29 17:27:13.194475 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:13.194486 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:13.194497 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:13.194507 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:13.194518 | orchestrator | changed: [testbed-node-1] 2025-08-29 
17:27:13.194529 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:13.194539 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:13.194550 | orchestrator | 2025-08-29 17:27:13.194560 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 17:27:13.194571 | orchestrator | Friday 29 August 2025 17:27:08 +0000 (0:00:01.540) 0:00:22.529 ********* 2025-08-29 17:27:13.194582 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.194592 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.194603 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.194613 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.194624 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.194635 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.194645 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.194656 | orchestrator | 2025-08-29 17:27:13.194667 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 17:27:13.194678 | orchestrator | Friday 29 August 2025 17:27:08 +0000 (0:00:00.240) 0:00:22.769 ********* 2025-08-29 17:27:13.194696 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.194707 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.194718 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.194728 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.194739 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.194750 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.194760 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.194771 | orchestrator | 2025-08-29 17:27:13.194782 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 17:27:13.194793 | orchestrator | Friday 29 August 2025 17:27:08 +0000 (0:00:00.280) 0:00:23.050 ********* 2025-08-29 17:27:13.194803 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.194814 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.194824 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.194835 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.194846 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.194856 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.194867 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.194878 | orchestrator | 2025-08-29 17:27:13.194889 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 17:27:13.194899 | orchestrator | Friday 29 August 2025 17:27:09 +0000 (0:00:00.242) 0:00:23.293 ********* 2025-08-29 17:27:13.194911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:13.194923 | orchestrator | 2025-08-29 17:27:13.194934 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 17:27:13.194945 | orchestrator | Friday 29 August 2025 17:27:09 +0000 (0:00:00.320) 0:00:23.613 ********* 2025-08-29 17:27:13.194956 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.194966 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.194977 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.194988 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.194998 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.195009 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.195019 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.195030 | orchestrator | 2025-08-29 17:27:13.195040 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 17:27:13.195051 | orchestrator | Friday 29 August 2025 17:27:10 +0000 (0:00:00.544) 0:00:24.158 ********* 2025-08-29 17:27:13.195062 | orchestrator | 
skipping: [testbed-manager] 2025-08-29 17:27:13.195073 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:13.195083 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:13.195094 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:27:13.195104 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:13.195115 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:13.195125 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:13.195136 | orchestrator | 2025-08-29 17:27:13.195152 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 17:27:13.195163 | orchestrator | Friday 29 August 2025 17:27:10 +0000 (0:00:00.296) 0:00:24.454 ********* 2025-08-29 17:27:13.195174 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.195185 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:13.195195 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.195206 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:13.195217 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:13.195227 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.195238 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.195248 | orchestrator | 2025-08-29 17:27:13.195259 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 17:27:13.195270 | orchestrator | Friday 29 August 2025 17:27:11 +0000 (0:00:01.063) 0:00:25.518 ********* 2025-08-29 17:27:13.195280 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.195297 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:13.195307 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:13.195361 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.195372 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:13.195383 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:13.195394 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.195404 | orchestrator | 
2025-08-29 17:27:13.195415 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 17:27:13.195426 | orchestrator | Friday 29 August 2025 17:27:12 +0000 (0:00:00.623) 0:00:26.142 ********* 2025-08-29 17:27:13.195437 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:13.195447 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:13.195458 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:13.195469 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:13.195487 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.057213 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:57.057306 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:57.057368 | orchestrator | 2025-08-29 17:27:57.057382 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 17:27:57.057394 | orchestrator | Friday 29 August 2025 17:27:13 +0000 (0:00:01.087) 0:00:27.229 ********* 2025-08-29 17:27:57.057406 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.057418 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.057429 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.057440 | orchestrator | changed: [testbed-manager] 2025-08-29 17:27:57.057451 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:57.057462 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:57.057473 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:57.057485 | orchestrator | 2025-08-29 17:27:57.057496 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-08-29 17:27:57.057508 | orchestrator | Friday 29 August 2025 17:27:31 +0000 (0:00:18.178) 0:00:45.407 ********* 2025-08-29 17:27:57.057519 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.057530 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.057540 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.057551 | orchestrator 
| ok: [testbed-node-2] 2025-08-29 17:27:57.057562 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.057573 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.057584 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.057595 | orchestrator | 2025-08-29 17:27:57.057606 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-08-29 17:27:57.057617 | orchestrator | Friday 29 August 2025 17:27:31 +0000 (0:00:00.182) 0:00:45.590 ********* 2025-08-29 17:27:57.057628 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.057639 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.057649 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.057660 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.057671 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.057682 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.057693 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.057703 | orchestrator | 2025-08-29 17:27:57.057715 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-08-29 17:27:57.057728 | orchestrator | Friday 29 August 2025 17:27:31 +0000 (0:00:00.206) 0:00:45.796 ********* 2025-08-29 17:27:57.057741 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.057753 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.057766 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.057778 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.057791 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.057803 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.057825 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.057836 | orchestrator | 2025-08-29 17:27:57.057847 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-08-29 17:27:57.057858 | orchestrator | Friday 29 August 2025 17:27:31 +0000 (0:00:00.189) 0:00:45.986 ********* 2025-08-29 
17:27:57.057871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:57.057905 | orchestrator | 2025-08-29 17:27:57.057917 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-08-29 17:27:57.057928 | orchestrator | Friday 29 August 2025 17:27:32 +0000 (0:00:00.245) 0:00:46.231 ********* 2025-08-29 17:27:57.057939 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.057950 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.057961 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.057971 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.057982 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.057993 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.058003 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.058066 | orchestrator | 2025-08-29 17:27:57.058079 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-08-29 17:27:57.058091 | orchestrator | Friday 29 August 2025 17:27:34 +0000 (0:00:01.845) 0:00:48.077 ********* 2025-08-29 17:27:57.058113 | orchestrator | changed: [testbed-manager] 2025-08-29 17:27:57.058125 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:57.058136 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:57.058147 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:57.058158 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:57.058168 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:57.058179 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:57.058190 | orchestrator | 2025-08-29 17:27:57.058201 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-08-29 17:27:57.058212 | 
orchestrator | Friday 29 August 2025 17:27:35 +0000 (0:00:01.098) 0:00:49.175 ********* 2025-08-29 17:27:57.058223 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.058233 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.058244 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.058255 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.058265 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.058276 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.058287 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.058297 | orchestrator | 2025-08-29 17:27:57.058308 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-08-29 17:27:57.058332 | orchestrator | Friday 29 August 2025 17:27:35 +0000 (0:00:00.861) 0:00:50.036 ********* 2025-08-29 17:27:57.058344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:57.058356 | orchestrator | 2025-08-29 17:27:57.058368 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-08-29 17:27:57.058379 | orchestrator | Friday 29 August 2025 17:27:36 +0000 (0:00:00.328) 0:00:50.365 ********* 2025-08-29 17:27:57.058390 | orchestrator | changed: [testbed-manager] 2025-08-29 17:27:57.058401 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:57.058412 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:57.058423 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:57.058434 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:57.058444 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:57.058455 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:57.058466 | orchestrator | 2025-08-29 17:27:57.058492 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-08-29 17:27:57.058504 | orchestrator | Friday 29 August 2025 17:27:37 +0000 (0:00:01.122) 0:00:51.487 ********* 2025-08-29 17:27:57.058515 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:27:57.058526 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:27:57.058537 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:27:57.058548 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:27:57.058558 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:27:57.058578 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:27:57.058589 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:27:57.058600 | orchestrator | 2025-08-29 17:27:57.058610 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-08-29 17:27:57.058621 | orchestrator | Friday 29 August 2025 17:27:37 +0000 (0:00:00.323) 0:00:51.811 ********* 2025-08-29 17:27:57.058632 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:57.058643 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:57.058653 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:57.058664 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:57.058675 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:57.058685 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:57.058696 | orchestrator | changed: [testbed-manager] 2025-08-29 17:27:57.058707 | orchestrator | 2025-08-29 17:27:57.058717 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-08-29 17:27:57.058728 | orchestrator | Friday 29 August 2025 17:27:51 +0000 (0:00:14.093) 0:01:05.904 ********* 2025-08-29 17:27:57.058739 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.058750 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.058761 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.058771 | orchestrator | ok: [testbed-node-2] 2025-08-29 
17:27:57.058782 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.058793 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.058804 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.058814 | orchestrator | 2025-08-29 17:27:57.058825 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-08-29 17:27:57.058836 | orchestrator | Friday 29 August 2025 17:27:52 +0000 (0:00:00.816) 0:01:06.720 ********* 2025-08-29 17:27:57.058847 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.058858 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.058869 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.058880 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.058890 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.058901 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.058912 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.058922 | orchestrator | 2025-08-29 17:27:57.058933 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-08-29 17:27:57.058944 | orchestrator | Friday 29 August 2025 17:27:53 +0000 (0:00:00.944) 0:01:07.665 ********* 2025-08-29 17:27:57.058955 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.058980 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.058992 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.059004 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.059015 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.059027 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.059039 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.059051 | orchestrator | 2025-08-29 17:27:57.059062 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-08-29 17:27:57.059075 | orchestrator | Friday 29 August 2025 17:27:53 +0000 (0:00:00.253) 0:01:07.919 ********* 2025-08-29 17:27:57.059086 | 
orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.059098 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.059109 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.059121 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.059132 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.059144 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.059156 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.059168 | orchestrator | 2025-08-29 17:27:57.059179 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-08-29 17:27:57.059191 | orchestrator | Friday 29 August 2025 17:27:54 +0000 (0:00:00.275) 0:01:08.194 ********* 2025-08-29 17:27:57.059203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:27:57.059222 | orchestrator | 2025-08-29 17:27:57.059234 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-08-29 17:27:57.059245 | orchestrator | Friday 29 August 2025 17:27:54 +0000 (0:00:00.351) 0:01:08.546 ********* 2025-08-29 17:27:57.059256 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.059272 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.059284 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.059295 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.059306 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.059317 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.059343 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.059354 | orchestrator | 2025-08-29 17:27:57.059366 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-08-29 17:27:57.059377 | orchestrator | Friday 29 August 2025 17:27:56 +0000 
(0:00:01.667) 0:01:10.213 ********* 2025-08-29 17:27:57.059388 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:27:57.059399 | orchestrator | changed: [testbed-manager] 2025-08-29 17:27:57.059411 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:27:57.059422 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:27:57.059433 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:27:57.059444 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:27:57.059455 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:27:57.059466 | orchestrator | 2025-08-29 17:27:57.059477 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-08-29 17:27:57.059488 | orchestrator | Friday 29 August 2025 17:27:56 +0000 (0:00:00.644) 0:01:10.858 ********* 2025-08-29 17:27:57.059499 | orchestrator | ok: [testbed-manager] 2025-08-29 17:27:57.059511 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:27:57.059522 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:27:57.059533 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:27:57.059544 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:27:57.059555 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:27:57.059566 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:27:57.059577 | orchestrator | 2025-08-29 17:27:57.059595 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-08-29 17:30:19.809054 | orchestrator | Friday 29 August 2025 17:27:57 +0000 (0:00:00.239) 0:01:11.097 ********* 2025-08-29 17:30:19.809165 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:19.809181 | orchestrator | ok: [testbed-manager] 2025-08-29 17:30:19.809193 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:19.809205 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:19.809215 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:19.809226 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:19.809237 | orchestrator | ok: 
[testbed-node-5] 2025-08-29 17:30:19.809248 | orchestrator | 2025-08-29 17:30:19.809260 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-08-29 17:30:19.809271 | orchestrator | Friday 29 August 2025 17:27:58 +0000 (0:00:01.302) 0:01:12.400 ********* 2025-08-29 17:30:19.809282 | orchestrator | changed: [testbed-manager] 2025-08-29 17:30:19.809294 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:19.809305 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:19.809315 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:19.809326 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:19.809337 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:19.809423 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:19.809434 | orchestrator | 2025-08-29 17:30:19.809445 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-08-29 17:30:19.809456 | orchestrator | Friday 29 August 2025 17:28:00 +0000 (0:00:01.716) 0:01:14.116 ********* 2025-08-29 17:30:19.809467 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:19.809478 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:19.809489 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:19.809499 | orchestrator | ok: [testbed-manager] 2025-08-29 17:30:19.809510 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:19.809521 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:19.809560 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:19.809571 | orchestrator | 2025-08-29 17:30:19.809582 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-08-29 17:30:19.809594 | orchestrator | Friday 29 August 2025 17:28:02 +0000 (0:00:02.487) 0:01:16.603 ********* 2025-08-29 17:30:19.809606 | orchestrator | ok: [testbed-manager] 2025-08-29 17:30:19.809618 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:19.809630 | orchestrator | 
ok: [testbed-node-2] 2025-08-29 17:30:19.809642 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:19.809654 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:19.809665 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:19.809678 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:19.809690 | orchestrator | 2025-08-29 17:30:19.809702 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-08-29 17:30:19.809715 | orchestrator | Friday 29 August 2025 17:28:39 +0000 (0:00:37.371) 0:01:53.975 ********* 2025-08-29 17:30:19.809727 | orchestrator | changed: [testbed-manager] 2025-08-29 17:30:19.809737 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:30:19.809748 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:30:19.809759 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:30:19.809770 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:30:19.809780 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:30:19.809791 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:30:19.809801 | orchestrator | 2025-08-29 17:30:19.809812 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-08-29 17:30:19.809823 | orchestrator | Friday 29 August 2025 17:30:00 +0000 (0:01:20.346) 0:03:14.322 ********* 2025-08-29 17:30:19.809833 | orchestrator | ok: [testbed-manager] 2025-08-29 17:30:19.809844 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:19.809854 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:19.809865 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:19.809875 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:19.809886 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:19.809896 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:19.809906 | orchestrator | 2025-08-29 17:30:19.809917 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-08-29 17:30:19.809929 
| orchestrator | Friday 29 August 2025 17:30:02 +0000 (0:00:01.909) 0:03:16.232 ********* 2025-08-29 17:30:19.809940 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:19.809950 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:30:19.809961 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:19.809971 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:30:19.809982 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:30:19.809992 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:19.810003 | orchestrator | changed: [testbed-manager] 2025-08-29 17:30:19.810013 | orchestrator | 2025-08-29 17:30:19.810086 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-08-29 17:30:19.810097 | orchestrator | Friday 29 August 2025 17:30:13 +0000 (0:00:11.799) 0:03:28.032 ********* 2025-08-29 17:30:19.810134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-08-29 17:30:19.810156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-08-29 17:30:19.810203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-08-29 17:30:19.810218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-08-29 17:30:19.810229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-08-29 17:30:19.810241 | orchestrator | 2025-08-29 17:30:19.810252 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-08-29 17:30:19.810263 | orchestrator | Friday 29 August 2025 17:30:14 +0000 (0:00:00.446) 0:03:28.478 ********* 2025-08-29 17:30:19.810274 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 17:30:19.810284 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:30:19.810295 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 17:30:19.810306 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 17:30:19.810317 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:19.810327 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
17:30:19.810356 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-08-29 17:30:19.810368 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:19.810379 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 17:30:19.810390 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 17:30:19.810400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 17:30:19.810411 | orchestrator | 2025-08-29 17:30:19.810421 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-08-29 17:30:19.810432 | orchestrator | Friday 29 August 2025 17:30:15 +0000 (0:00:00.663) 0:03:29.142 ********* 2025-08-29 17:30:19.810443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 17:30:19.810455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 17:30:19.810466 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 17:30:19.810476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 17:30:19.810487 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 17:30:19.810498 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 17:30:19.810508 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 17:30:19.810519 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 17:30:19.810529 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 17:30:19.810540 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 17:30:19.810559 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:30:19.810570 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 17:30:19.810581 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 17:30:19.810592 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 17:30:19.810602 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 17:30:19.810613 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 17:30:19.810624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 17:30:19.810634 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 17:30:19.810645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 17:30:19.810656 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 17:30:19.810666 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 17:30:19.810684 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 17:30:21.993425 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 17:30:21.993519 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 
17:30:21.993533 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 17:30:21.993546 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 17:30:21.993557 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 17:30:21.993568 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 17:30:21.993579 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 17:30:21.993591 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:21.993603 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 17:30:21.993613 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 17:30:21.993624 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:21.993635 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-08-29 17:30:21.993645 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-08-29 17:30:21.993656 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-08-29 17:30:21.993667 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-08-29 17:30:21.993677 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-08-29 17:30:21.993688 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-08-29 17:30:21.993699 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-08-29 
17:30:21.993709 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-08-29 17:30:21.993719 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-08-29 17:30:21.993730 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-08-29 17:30:21.993741 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:21.993778 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 17:30:21.993790 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 17:30:21.993800 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-08-29 17:30:21.993811 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 17:30:21.993821 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 17:30:21.993847 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 17:30:21.993858 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 17:30:21.993885 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 17:30:21.993897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 17:30:21.993912 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-08-29 17:30:21.993924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 17:30:21.993934 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-08-29 17:30:21.993945 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 17:30:21.993956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-08-29 17:30:21.993966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 17:30:21.993977 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 17:30:21.993987 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-08-29 17:30:21.993998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 17:30:21.994008 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 17:30:21.994070 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 17:30:21.994082 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-08-29 17:30:21.994110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 17:30:21.994122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 17:30:21.994133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-08-29 17:30:21.994144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 17:30:21.994155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-08-29 17:30:21.994166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 17:30:21.994177 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-08-29 17:30:21.994187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 17:30:21.994198 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-08-29 17:30:21.994209 | orchestrator | 2025-08-29 17:30:21.994221 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-08-29 17:30:21.994232 | orchestrator | Friday 29 August 2025 17:30:19 +0000 (0:00:04.708) 0:03:33.850 ********* 2025-08-29 17:30:21.994252 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994273 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994284 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994295 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994306 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994316 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-08-29 17:30:21.994327 | orchestrator | 2025-08-29 17:30:21.994356 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-08-29 17:30:21.994368 | orchestrator | Friday 29 August 2025 17:30:20 +0000 (0:00:00.601) 0:03:34.451 ********* 2025-08-29 17:30:21.994384 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 17:30:21.994395 | orchestrator | skipping: [testbed-manager] 2025-08-29 
17:30:21.994406 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 17:30:21.994417 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 17:30:21.994428 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:21.994438 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:21.994449 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-08-29 17:30:21.994460 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:21.994471 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 17:30:21.994482 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 17:30:21.994492 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-08-29 17:30:21.994503 | orchestrator | 2025-08-29 17:30:21.994514 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-08-29 17:30:21.994525 | orchestrator | Friday 29 August 2025 17:30:21 +0000 (0:00:00.603) 0:03:35.055 ********* 2025-08-29 17:30:21.994535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 17:30:21.994551 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 17:30:21.994563 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:30:21.994573 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 17:30:21.994584 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:21.994595 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2025-08-29 17:30:21.994605 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:21.994616 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:21.994627 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 17:30:21.994638 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 17:30:21.994648 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-08-29 17:30:21.994659 | orchestrator | 2025-08-29 17:30:21.994670 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-08-29 17:30:21.994681 | orchestrator | Friday 29 August 2025 17:30:21 +0000 (0:00:00.686) 0:03:35.742 ********* 2025-08-29 17:30:21.994692 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:30:21.994702 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:30:21.994721 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:30:21.994732 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:30:21.994743 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:30:21.994761 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:30:34.648942 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:30:34.649036 | orchestrator | 2025-08-29 17:30:34.649053 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-08-29 17:30:34.649066 | orchestrator | Friday 29 August 2025 17:30:21 +0000 (0:00:00.296) 0:03:36.039 ********* 2025-08-29 17:30:34.649077 | orchestrator | ok: [testbed-manager] 2025-08-29 17:30:34.649088 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:30:34.649098 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:30:34.649109 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:30:34.649120 | orchestrator | ok: [testbed-node-2] 
2025-08-29 17:30:34.649130 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:34.649141 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:34.649151 | orchestrator |
2025-08-29 17:30:34.649163 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-08-29 17:30:34.649174 | orchestrator | Friday 29 August 2025 17:30:27 +0000 (0:00:05.971) 0:03:42.010 *********
2025-08-29 17:30:34.649185 | orchestrator | skipping: [testbed-manager] => (item=nscd) 
2025-08-29 17:30:34.649196 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:30:34.649206 | orchestrator | skipping: [testbed-node-0] => (item=nscd) 
2025-08-29 17:30:34.649217 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:30:34.649228 | orchestrator | skipping: [testbed-node-1] => (item=nscd) 
2025-08-29 17:30:34.649238 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:30:34.649249 | orchestrator | skipping: [testbed-node-2] => (item=nscd) 
2025-08-29 17:30:34.649260 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:30:34.649270 | orchestrator | skipping: [testbed-node-3] => (item=nscd) 
2025-08-29 17:30:34.649281 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:30:34.649291 | orchestrator | skipping: [testbed-node-4] => (item=nscd) 
2025-08-29 17:30:34.649302 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:30:34.649313 | orchestrator | skipping: [testbed-node-5] => (item=nscd) 
2025-08-29 17:30:34.649323 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:30:34.649334 | orchestrator |
2025-08-29 17:30:34.649381 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-08-29 17:30:34.649394 | orchestrator | Friday 29 August 2025 17:30:28 +0000 (0:00:00.357) 0:03:42.367 *********
2025-08-29 17:30:34.649405 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-08-29 17:30:34.649416 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-08-29 17:30:34.649427 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-08-29 17:30:34.649437 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-08-29 17:30:34.649448 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-08-29 17:30:34.649458 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-08-29 17:30:34.649469 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-08-29 17:30:34.649480 | orchestrator |
2025-08-29 17:30:34.649490 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-08-29 17:30:34.649501 | orchestrator | Friday 29 August 2025 17:30:29 +0000 (0:00:01.254) 0:03:43.622 *********
2025-08-29 17:30:34.649516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:30:34.649530 | orchestrator |
2025-08-29 17:30:34.649543 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-08-29 17:30:34.649556 | orchestrator | Friday 29 August 2025 17:30:30 +0000 (0:00:00.448) 0:03:44.071 *********
2025-08-29 17:30:34.649568 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:34.649580 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:34.649592 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:34.649626 | orchestrator | ok: [testbed-manager]
2025-08-29 17:30:34.649639 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:34.649651 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:34.649663 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:34.649675 | orchestrator |
2025-08-29 17:30:34.649687 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-08-29 17:30:34.649699 | orchestrator | Friday 29 August 2025 17:30:31 +0000 (0:00:01.691) 0:03:45.762 *********
2025-08-29 17:30:34.649712 | orchestrator | ok: [testbed-manager]
2025-08-29 17:30:34.649725 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:34.649736 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:34.649748 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:34.649762 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:34.649779 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:34.649799 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:34.649819 | orchestrator |
2025-08-29 17:30:34.649852 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-08-29 17:30:34.649865 | orchestrator | Friday 29 August 2025 17:30:32 +0000 (0:00:00.587) 0:03:46.350 *********
2025-08-29 17:30:34.649876 | orchestrator | changed: [testbed-manager]
2025-08-29 17:30:34.649887 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:30:34.649897 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:30:34.649908 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:30:34.649919 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:30:34.649929 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:30:34.649940 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:30:34.649951 | orchestrator |
2025-08-29 17:30:34.649961 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-08-29 17:30:34.649972 | orchestrator | Friday 29 August 2025 17:30:32 +0000 (0:00:00.596) 0:03:46.946 *********
2025-08-29 17:30:34.649983 | orchestrator | ok: [testbed-manager]
2025-08-29 17:30:34.649993 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:30:34.650004 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:30:34.650066 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:30:34.650079 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:30:34.650089 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:30:34.650100 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:30:34.650111 | orchestrator |
2025-08-29 17:30:34.650122 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-08-29 17:30:34.650164 | orchestrator | Friday 29 August 2025 17:30:33 +0000 (0:00:00.671) 0:03:47.618 *********
2025-08-29 17:30:34.650197 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487087.541527, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650213 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487082.3229787, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650226 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487032.8596804, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650248 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487062.6924634, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650259 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487065.0294144, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650271 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487056.652502, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650282 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756487076.901911, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:30:34.650310 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973271 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973434 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973474 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973487 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973504 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973516 |
orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 17:31:00.973528 | orchestrator |
2025-08-29 17:31:00.973542 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-08-29 17:31:00.973554 | orchestrator | Friday 29 August 2025 17:30:34 +0000 (0:00:01.063) 0:03:48.681 *********
2025-08-29 17:31:00.973566 | orchestrator | changed: [testbed-manager]
2025-08-29 17:31:00.973578 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:31:00.973588 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:31:00.973600 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:31:00.973610 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:31:00.973621 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:31:00.973632 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:31:00.973643 | orchestrator |
2025-08-29 17:31:00.973654 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-08-29 17:31:00.973665 | orchestrator | Friday 29 August 2025 17:30:35 +0000 (0:00:01.143) 0:03:49.824 *********
2025-08-29 17:31:00.973677 | orchestrator | changed: [testbed-manager]
2025-08-29 17:31:00.973688 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:31:00.973699 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:31:00.973711 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:31:00.973735 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:31:00.973747 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:31:00.973758 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:31:00.973769 | orchestrator |
2025-08-29 17:31:00.973780 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-08-29 17:31:00.973798 | orchestrator | Friday 29 August 2025 17:30:36 +0000 (0:00:01.153) 0:03:50.978 *********
2025-08-29 17:31:00.973809 | orchestrator | changed: [testbed-manager]
2025-08-29 17:31:00.973820 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:31:00.973832 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:31:00.973845 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:31:00.973858 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:31:00.973870 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:31:00.973881 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:31:00.973893 | orchestrator |
2025-08-29 17:31:00.973906 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-08-29 17:31:00.973919 | orchestrator | Friday 29 August 2025 17:30:38 +0000 (0:00:01.252) 0:03:52.230 *********
2025-08-29 17:31:00.973931 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:31:00.973943 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:31:00.973955 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:31:00.973967 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:31:00.973979 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:31:00.973992 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:31:00.974004 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:31:00.974069 | orchestrator |
2025-08-29 17:31:00.974083 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-08-29 17:31:00.974096 | orchestrator | Friday 29 August 2025 17:30:38 +0000 (0:00:00.298) 0:03:52.529 *********
2025-08-29 17:31:00.974108 | orchestrator | ok: [testbed-manager]
2025-08-29 17:31:00.974122 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:31:00.974135 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:31:00.974146 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:31:00.974158 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:31:00.974170 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:31:00.974183 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:31:00.974195 | orchestrator |
2025-08-29 17:31:00.974206 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-08-29 17:31:00.974217 | orchestrator | Friday 29 August 2025 17:30:39 +0000 (0:00:00.766) 0:03:53.296 *********
2025-08-29 17:31:00.974229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:31:00.974242 | orchestrator |
2025-08-29 17:31:00.974253 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-08-29 17:31:00.974264 | orchestrator | Friday 29 August 2025 17:30:39 +0000 (0:00:00.494) 0:03:53.790 *********
2025-08-29 17:31:00.974275 | orchestrator | ok: [testbed-manager]
2025-08-29 17:31:00.974286 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:31:00.974297 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:31:00.974308 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:31:00.974319 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:31:00.974329 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:31:00.974340 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:31:00.974378 | orchestrator |
2025-08-29 17:31:00.974390 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-08-29 17:31:00.974401 | orchestrator |
Friday 29 August 2025 17:30:48 +0000 (0:00:08.465) 0:04:02.255 *********
2025-08-29 17:31:00.974412 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:31:00.974423 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:31:00.974434 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:31:00.974445 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:31:00.974456 | orchestrator | ok: [testbed-manager]
2025-08-29 17:31:00.974466 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:31:00.974477 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:31:00.974488 | orchestrator |
2025-08-29 17:31:00.974498 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-08-29 17:31:00.974517 | orchestrator | Friday 29 August 2025 17:30:49 +0000 (0:00:01.299) 0:04:03.555 *********
2025-08-29 17:31:00.974528 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:31:00.974539 | orchestrator | ok: [testbed-manager]
2025-08-29 17:31:00.974555 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:31:00.974566 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:31:00.974577 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:31:00.974587 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:31:00.974598 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:31:00.974609 | orchestrator |
2025-08-29 17:31:00.974620 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-08-29 17:31:00.974631 | orchestrator | Friday 29 August 2025 17:30:50 +0000 (0:00:01.058) 0:04:04.614 *********
2025-08-29 17:31:00.974642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:31:00.974653 | orchestrator |
2025-08-29 17:31:00.974664 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-08-29 17:31:00.974675 | orchestrator | Friday 29 August 2025 17:30:51 +0000 (0:00:00.595) 0:04:05.210 *********
2025-08-29 17:31:00.974686 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:31:00.974697 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:31:00.974708 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:31:00.974719 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:31:00.974729 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:31:00.974740 | orchestrator | changed: [testbed-manager]
2025-08-29 17:31:00.974751 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:31:00.974762 | orchestrator |
2025-08-29 17:31:00.974773 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-08-29 17:31:00.974784 | orchestrator | Friday 29 August 2025 17:31:00 +0000 (0:00:09.142) 0:04:14.353 *********
2025-08-29 17:31:00.974795 | orchestrator | changed: [testbed-manager]
2025-08-29 17:31:00.974806 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:31:00.974816 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:31:00.974836 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:32:10.967298 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:32:10.967469 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:32:10.967485 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:32:10.967498 | orchestrator |
2025-08-29 17:32:10.967511 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-08-29 17:32:10.967523 | orchestrator | Friday 29 August 2025 17:31:00 +0000 (0:00:00.660) 0:04:15.014 *********
2025-08-29 17:32:10.967535 | orchestrator | changed: [testbed-manager]
2025-08-29 17:32:10.967546 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:32:10.967557 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:32:10.967568 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:32:10.967579 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:32:10.967589 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:32:10.967600 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:32:10.967611 | orchestrator |
2025-08-29 17:32:10.967623 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-08-29 17:32:10.967634 | orchestrator | Friday 29 August 2025 17:31:02 +0000 (0:00:01.187) 0:04:16.201 *********
2025-08-29 17:32:10.967645 | orchestrator | changed: [testbed-manager]
2025-08-29 17:32:10.967656 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:32:10.967667 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:32:10.967677 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:32:10.967688 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:32:10.967699 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:32:10.967710 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:32:10.967720 | orchestrator |
2025-08-29 17:32:10.967731 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-08-29 17:32:10.967742 | orchestrator | Friday 29 August 2025 17:31:03 +0000 (0:00:01.160) 0:04:17.362 *********
2025-08-29 17:32:10.967777 | orchestrator | ok: [testbed-manager]
2025-08-29 17:32:10.967790 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:32:10.967801 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:32:10.967812 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:32:10.967822 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:32:10.967835 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:32:10.967847 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:32:10.967859 | orchestrator |
2025-08-29 17:32:10.967871 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-08-29 17:32:10.967884 | orchestrator | Friday 29 August 2025 17:31:03 +0000 (0:00:00.341) 0:04:17.703 *********
2025-08-29 17:32:10.967897 | orchestrator | ok: [testbed-manager]
2025-08-29 17:32:10.967908 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:32:10.967920 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:32:10.967932 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:32:10.967943 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:32:10.967956 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:32:10.967968 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:32:10.967980 | orchestrator |
2025-08-29 17:32:10.967992 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-08-29 17:32:10.968005 | orchestrator | Friday 29 August 2025 17:31:03 +0000 (0:00:00.322) 0:04:18.026 *********
2025-08-29 17:32:10.968017 | orchestrator | ok: [testbed-manager]
2025-08-29 17:32:10.968029 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:32:10.968041 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:32:10.968052 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:32:10.968064 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:32:10.968076 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:32:10.968088 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:32:10.968100 | orchestrator |
2025-08-29 17:32:10.968112 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-08-29 17:32:10.968124 | orchestrator | Friday 29 August 2025 17:31:04 +0000 (0:00:00.302) 0:04:18.328 *********
2025-08-29 17:32:10.968136 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:32:10.968148 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:32:10.968160 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:32:10.968172 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:32:10.968184 | orchestrator | ok: [testbed-manager]
2025-08-29 17:32:10.968195 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:32:10.968206 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:32:10.968216 |
orchestrator |
2025-08-29 17:32:10.968228 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-08-29 17:32:10.968239 | orchestrator | Friday 29 August 2025 17:31:09 +0000 (0:00:05.603) 0:04:23.932 *********
2025-08-29 17:32:10.968265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:32:10.968280 | orchestrator |
2025-08-29 17:32:10.968291 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-08-29 17:32:10.968302 | orchestrator | Friday 29 August 2025 17:31:10 +0000 (0:00:00.519) 0:04:24.452 *********
2025-08-29 17:32:10.968313 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968323 | orchestrator | skipping: [testbed-manager] => (item=apt-daily) 
2025-08-29 17:32:10.968335 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968346 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2025-08-29 17:32:10.968377 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:32:10.968389 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968400 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2025-08-29 17:32:10.968411 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:32:10.968422 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968441 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2025-08-29 17:32:10.968452 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:32:10.968463 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:32:10.968474 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968485 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily) 
2025-08-29 17:32:10.968496 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968507 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily) 
2025-08-29 17:32:10.968518 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:32:10.968545 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:32:10.968557 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade) 
2025-08-29 17:32:10.968568 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2025-08-29 17:32:10.968580 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:32:10.968591 | orchestrator |
2025-08-29 17:32:10.968602 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-08-29 17:32:10.968613 | orchestrator | Friday 29 August 2025 17:31:10 +0000 (0:00:00.385) 0:04:24.837 *********
2025-08-29 17:32:10.968624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:32:10.968636 | orchestrator |
2025-08-29 17:32:10.968647 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-08-29 17:32:10.968658 | orchestrator | Friday 29 August 2025 17:31:11 +0000 (0:00:00.490) 0:04:25.328 *********
2025-08-29 17:32:10.968669 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2025-08-29 17:32:10.968679 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2025-08-29 17:32:10.968690 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:32:10.968701 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2025-08-29 17:32:10.968712 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:32:10.968723 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2025-08-29 17:32:10.968734 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:32:10.968744 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2025-08-29 17:32:10.968755 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:32:10.968766 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2025-08-29 17:32:10.968777 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:32:10.968788 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:32:10.968799 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2025-08-29 17:32:10.968810 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:32:10.968820 | orchestrator |
2025-08-29 17:32:10.968831 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-08-29 17:32:10.968842 | orchestrator | Friday 29 August 2025 17:31:11 +0000 (0:00:00.334) 0:04:25.663 *********
2025-08-29 17:32:10.968853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:32:10.968864 | orchestrator |
2025-08-29 17:32:10.968875 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-08-29 17:32:10.968886 | orchestrator | Friday 29 August 2025 17:31:12 +0000 (0:00:00.645) 0:04:26.308 *********
2025-08-29 17:32:10.968897 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:32:10.968908 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:32:10.968919 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:32:10.968929 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:32:10.968940 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:32:10.968951 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:32:10.968968 | orchestrator | changed: [testbed-manager]
2025-08-29 17:32:10.968979 | orchestrator |
2025-08-29 17:32:10.968990 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-08-29 17:32:10.969001 | orchestrator | Friday 29 August 2025 17:31:47 +0000 (0:00:34.939) 0:05:01.248 *********
2025-08-29 17:32:10.969012 | orchestrator | changed: [testbed-manager]
2025-08-29 17:32:10.969022 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:32:10.969033 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:32:10.969044 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:32:10.969055 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:32:10.969066 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:32:10.969077 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:32:10.969087 | orchestrator |
2025-08-29 17:32:10.969099 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-08-29 17:32:10.969110 | orchestrator | Friday 29 August 2025 17:31:55 +0000 (0:00:07.897) 0:05:09.145 *********
2025-08-29 17:32:10.969121 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:32:10.969132 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:32:10.969142 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:32:10.969153 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:32:10.969164 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:32:10.969175 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:32:10.969186 | orchestrator | changed: [testbed-manager]
2025-08-29 17:32:10.969196 | orchestrator |
2025-08-29 17:32:10.969207 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-08-29 17:32:10.969218 | orchestrator | Friday 29 August 2025 17:32:03 +0000 (0:00:07.923) 0:05:17.069 *********
2025-08-29 17:32:10.969229 | orchestrator | ok:
[testbed-node-1] 2025-08-29 17:32:10.969240 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:10.969251 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:10.969262 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:32:10.969273 | orchestrator | ok: [testbed-manager] 2025-08-29 17:32:10.969284 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:32:10.969294 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:32:10.969305 | orchestrator | 2025-08-29 17:32:10.969316 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-08-29 17:32:10.969327 | orchestrator | Friday 29 August 2025 17:32:04 +0000 (0:00:01.777) 0:05:18.846 ********* 2025-08-29 17:32:10.969338 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:10.969349 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:32:10.969377 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:32:10.969388 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:10.969399 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:10.969410 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:32:10.969421 | orchestrator | changed: [testbed-manager] 2025-08-29 17:32:10.969432 | orchestrator | 2025-08-29 17:32:10.969443 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-08-29 17:32:10.969460 | orchestrator | Friday 29 August 2025 17:32:10 +0000 (0:00:06.157) 0:05:25.004 ********* 2025-08-29 17:32:22.777277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:32:22.777430 | orchestrator | 2025-08-29 17:32:22.777448 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-08-29 17:32:22.777461 | orchestrator | Friday 29 August 2025 17:32:11 +0000 
(0:00:00.462) 0:05:25.466 ********* 2025-08-29 17:32:22.777473 | orchestrator | changed: [testbed-manager] 2025-08-29 17:32:22.777484 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:22.777495 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:22.777506 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:22.777517 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:32:22.777527 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:32:22.777538 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:32:22.777573 | orchestrator | 2025-08-29 17:32:22.777585 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-08-29 17:32:22.777596 | orchestrator | Friday 29 August 2025 17:32:12 +0000 (0:00:00.726) 0:05:26.192 ********* 2025-08-29 17:32:22.777606 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:22.777618 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:32:22.777628 | orchestrator | ok: [testbed-manager] 2025-08-29 17:32:22.777639 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:22.777650 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:22.777660 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:32:22.777671 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:32:22.777681 | orchestrator | 2025-08-29 17:32:22.777692 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-08-29 17:32:22.777703 | orchestrator | Friday 29 August 2025 17:32:13 +0000 (0:00:01.697) 0:05:27.889 ********* 2025-08-29 17:32:22.777714 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:32:22.777724 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:32:22.777735 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:32:22.777745 | orchestrator | changed: [testbed-manager] 2025-08-29 17:32:22.777756 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:32:22.777784 | orchestrator | changed: [testbed-node-4] 2025-08-29 
17:32:22.777795 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:32:22.777806 | orchestrator | 2025-08-29 17:32:22.777816 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-08-29 17:32:22.777827 | orchestrator | Friday 29 August 2025 17:32:14 +0000 (0:00:00.857) 0:05:28.747 ********* 2025-08-29 17:32:22.777838 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:32:22.777848 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:22.777859 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:22.777869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:22.777880 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:32:22.777890 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:32:22.777901 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:32:22.777911 | orchestrator | 2025-08-29 17:32:22.777922 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-08-29 17:32:22.777933 | orchestrator | Friday 29 August 2025 17:32:15 +0000 (0:00:00.359) 0:05:29.107 ********* 2025-08-29 17:32:22.777943 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:32:22.777955 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:22.777966 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:22.777976 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:22.777987 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:32:22.777997 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:32:22.778008 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:32:22.778075 | orchestrator | 2025-08-29 17:32:22.778086 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-08-29 17:32:22.778097 | orchestrator | Friday 29 August 2025 17:32:15 +0000 (0:00:00.430) 0:05:29.537 ********* 2025-08-29 17:32:22.778108 | orchestrator | ok: [testbed-manager] 2025-08-29 
17:32:22.778119 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:22.778160 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:22.778172 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:22.778183 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:32:22.778193 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:32:22.778204 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:32:22.778214 | orchestrator | 2025-08-29 17:32:22.778231 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-08-29 17:32:22.778242 | orchestrator | Friday 29 August 2025 17:32:15 +0000 (0:00:00.355) 0:05:29.893 ********* 2025-08-29 17:32:22.778254 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:32:22.778272 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:22.778291 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:22.778310 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:22.778329 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:32:22.778386 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:32:22.778400 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:32:22.778411 | orchestrator | 2025-08-29 17:32:22.778422 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-08-29 17:32:22.778434 | orchestrator | Friday 29 August 2025 17:32:16 +0000 (0:00:00.407) 0:05:30.300 ********* 2025-08-29 17:32:22.778445 | orchestrator | ok: [testbed-manager] 2025-08-29 17:32:22.778455 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:22.778466 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:22.778478 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:22.778489 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:32:22.778499 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:32:22.778510 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:32:22.778521 | orchestrator | 2025-08-29 17:32:22.778532 | orchestrator | 
TASK [osism.services.docker : Print used docker version] *********************** 2025-08-29 17:32:22.778543 | orchestrator | Friday 29 August 2025 17:32:16 +0000 (0:00:00.336) 0:05:30.637 ********* 2025-08-29 17:32:22.778554 | orchestrator | ok: [testbed-manager] =>  2025-08-29 17:32:22.778565 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778575 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 17:32:22.778586 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778597 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 17:32:22.778607 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778618 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 17:32:22.778629 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778640 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 17:32:22.778650 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778679 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 17:32:22.778691 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778702 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 17:32:22.778713 | orchestrator |  docker_version: 5:27.5.1 2025-08-29 17:32:22.778723 | orchestrator | 2025-08-29 17:32:22.778734 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-08-29 17:32:22.778745 | orchestrator | Friday 29 August 2025 17:32:16 +0000 (0:00:00.319) 0:05:30.956 ********* 2025-08-29 17:32:22.778756 | orchestrator | ok: [testbed-manager] =>  2025-08-29 17:32:22.778767 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778778 | orchestrator | ok: [testbed-node-0] =>  2025-08-29 17:32:22.778788 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778799 | orchestrator | ok: [testbed-node-1] =>  2025-08-29 17:32:22.778809 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778820 | orchestrator | ok: [testbed-node-2] =>  2025-08-29 17:32:22.778831 | orchestrator 
|  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778841 | orchestrator | ok: [testbed-node-3] =>  2025-08-29 17:32:22.778852 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778863 | orchestrator | ok: [testbed-node-4] =>  2025-08-29 17:32:22.778873 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778884 | orchestrator | ok: [testbed-node-5] =>  2025-08-29 17:32:22.778894 | orchestrator |  docker_cli_version: 5:27.5.1 2025-08-29 17:32:22.778905 | orchestrator | 2025-08-29 17:32:22.778916 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-08-29 17:32:22.778927 | orchestrator | Friday 29 August 2025 17:32:17 +0000 (0:00:00.487) 0:05:31.444 ********* 2025-08-29 17:32:22.778937 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:32:22.778948 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:22.778959 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:22.778969 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:22.778980 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:32:22.778991 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:32:22.779001 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:32:22.779012 | orchestrator | 2025-08-29 17:32:22.779023 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-08-29 17:32:22.779041 | orchestrator | Friday 29 August 2025 17:32:17 +0000 (0:00:00.283) 0:05:31.727 ********* 2025-08-29 17:32:22.779053 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:32:22.779064 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:22.779075 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:22.779085 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:32:22.779096 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:32:22.779106 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:32:22.779117 | orchestrator 
| skipping: [testbed-node-5] 2025-08-29 17:32:22.779128 | orchestrator | 2025-08-29 17:32:22.779138 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-08-29 17:32:22.779149 | orchestrator | Friday 29 August 2025 17:32:17 +0000 (0:00:00.309) 0:05:32.037 ********* 2025-08-29 17:32:22.779162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:32:22.779175 | orchestrator | 2025-08-29 17:32:22.779186 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-08-29 17:32:22.779197 | orchestrator | Friday 29 August 2025 17:32:18 +0000 (0:00:00.438) 0:05:32.475 ********* 2025-08-29 17:32:22.779208 | orchestrator | ok: [testbed-manager] 2025-08-29 17:32:22.779219 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:22.779229 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:22.779240 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:32:22.779251 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:22.779261 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:32:22.779272 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:32:22.779282 | orchestrator | 2025-08-29 17:32:22.779293 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-08-29 17:32:22.779304 | orchestrator | Friday 29 August 2025 17:32:19 +0000 (0:00:00.821) 0:05:33.296 ********* 2025-08-29 17:32:22.779315 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:32:22.779325 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:32:22.779341 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:32:22.779380 | orchestrator | ok: [testbed-manager] 2025-08-29 17:32:22.779392 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:32:22.779403 | orchestrator 
| ok: [testbed-node-4] 2025-08-29 17:32:22.779413 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:32:22.779424 | orchestrator | 2025-08-29 17:32:22.779435 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-08-29 17:32:22.779447 | orchestrator | Friday 29 August 2025 17:32:22 +0000 (0:00:02.919) 0:05:36.216 ********* 2025-08-29 17:32:22.779458 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-08-29 17:32:22.779469 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-08-29 17:32:22.779480 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-08-29 17:32:22.779491 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-08-29 17:32:22.779502 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-08-29 17:32:22.779513 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-08-29 17:32:22.779523 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:32:22.779534 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-08-29 17:32:22.779545 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-08-29 17:32:22.779556 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-08-29 17:32:22.779566 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:32:22.779577 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-08-29 17:32:22.779588 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-08-29 17:32:22.779598 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-08-29 17:32:22.779609 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:32:22.779620 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-08-29 17:32:22.779637 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-08-29 17:32:22.779655 | orchestrator | skipping: [testbed-node-3] => 
(item=docker-engine)  2025-08-29 17:33:23.916141 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:33:23.916241 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-08-29 17:33:23.916256 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-08-29 17:33:23.916268 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-08-29 17:33:23.916279 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:23.916290 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:23.916301 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-08-29 17:33:23.916312 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-08-29 17:33:23.916323 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-08-29 17:33:23.916334 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:23.916346 | orchestrator | 2025-08-29 17:33:23.916400 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-08-29 17:33:23.916413 | orchestrator | Friday 29 August 2025 17:32:23 +0000 (0:00:00.856) 0:05:37.072 ********* 2025-08-29 17:33:23.916424 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.916436 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.916447 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.916457 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.916468 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.916479 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.916490 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.916500 | orchestrator | 2025-08-29 17:33:23.916511 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-08-29 17:33:23.916523 | orchestrator | Friday 29 August 2025 17:32:29 +0000 (0:00:06.610) 0:05:43.683 ********* 2025-08-29 17:33:23.916534 | orchestrator | changed: [testbed-node-0] 
2025-08-29 17:33:23.916544 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.916555 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.916566 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.916585 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.916604 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.916624 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.916640 | orchestrator | 2025-08-29 17:33:23.916659 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-08-29 17:33:23.916679 | orchestrator | Friday 29 August 2025 17:32:30 +0000 (0:00:01.092) 0:05:44.775 ********* 2025-08-29 17:33:23.916700 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.916720 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.916736 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.916747 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.916758 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.916768 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.916779 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.916790 | orchestrator | 2025-08-29 17:33:23.916800 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-08-29 17:33:23.916811 | orchestrator | Friday 29 August 2025 17:32:38 +0000 (0:00:07.991) 0:05:52.767 ********* 2025-08-29 17:33:23.916822 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.916833 | orchestrator | changed: [testbed-manager] 2025-08-29 17:33:23.916843 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.916854 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.916865 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.916875 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.916886 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.916896 | 
orchestrator | 2025-08-29 17:33:23.916907 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-08-29 17:33:23.916918 | orchestrator | Friday 29 August 2025 17:32:42 +0000 (0:00:03.295) 0:05:56.062 ********* 2025-08-29 17:33:23.916929 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.916966 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.916977 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.916987 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.916998 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.917008 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.917018 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.917029 | orchestrator | 2025-08-29 17:33:23.917039 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-08-29 17:33:23.917064 | orchestrator | Friday 29 August 2025 17:32:43 +0000 (0:00:01.725) 0:05:57.788 ********* 2025-08-29 17:33:23.917075 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.917086 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.917097 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.917108 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.917118 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.917128 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.917139 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.917149 | orchestrator | 2025-08-29 17:33:23.917159 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-08-29 17:33:23.917170 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:01.380) 0:05:59.168 ********* 2025-08-29 17:33:23.917180 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:33:23.917191 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:33:23.917201 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 17:33:23.917211 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:23.917222 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:23.917232 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:23.917243 | orchestrator | changed: [testbed-manager] 2025-08-29 17:33:23.917253 | orchestrator | 2025-08-29 17:33:23.917263 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-08-29 17:33:23.917274 | orchestrator | Friday 29 August 2025 17:32:45 +0000 (0:00:00.714) 0:05:59.883 ********* 2025-08-29 17:33:23.917285 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.917295 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.917305 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.917316 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.917326 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.917336 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.917347 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.917377 | orchestrator | 2025-08-29 17:33:23.917388 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-08-29 17:33:23.917399 | orchestrator | Friday 29 August 2025 17:32:56 +0000 (0:00:10.328) 0:06:10.212 ********* 2025-08-29 17:33:23.917410 | orchestrator | changed: [testbed-manager] 2025-08-29 17:33:23.917437 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.917449 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.917459 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.917470 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.917480 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.917491 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.917501 | orchestrator | 2025-08-29 17:33:23.917512 | orchestrator | TASK [osism.services.docker : Install docker-cli package] 
********************** 2025-08-29 17:33:23.917523 | orchestrator | Friday 29 August 2025 17:32:57 +0000 (0:00:00.990) 0:06:11.203 ********* 2025-08-29 17:33:23.917534 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.917544 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.917555 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.917565 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.917575 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.917586 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.917596 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.917607 | orchestrator | 2025-08-29 17:33:23.917617 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-08-29 17:33:23.917640 | orchestrator | Friday 29 August 2025 17:33:06 +0000 (0:00:09.409) 0:06:20.612 ********* 2025-08-29 17:33:23.917651 | orchestrator | ok: [testbed-manager] 2025-08-29 17:33:23.917662 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:33:23.917672 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:33:23.917682 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:33:23.917693 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:33:23.917703 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:33:23.917714 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:33:23.917724 | orchestrator | 2025-08-29 17:33:23.917735 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-08-29 17:33:23.917745 | orchestrator | Friday 29 August 2025 17:33:17 +0000 (0:00:11.079) 0:06:31.692 ********* 2025-08-29 17:33:23.917756 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-08-29 17:33:23.917767 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-08-29 17:33:23.917777 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-08-29 17:33:23.917788 | orchestrator | 
ok: [testbed-node-2] => (item=python3-docker) 2025-08-29 17:33:23.917798 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-08-29 17:33:23.917809 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-08-29 17:33:23.917819 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-08-29 17:33:23.917829 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-08-29 17:33:23.917840 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-08-29 17:33:23.917850 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-08-29 17:33:23.917861 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-08-29 17:33:23.917871 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-08-29 17:33:23.917882 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-08-29 17:33:23.917892 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-08-29 17:33:23.917902 | orchestrator | 2025-08-29 17:33:23.917913 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-08-29 17:33:23.917924 | orchestrator | Friday 29 August 2025 17:33:18 +0000 (0:00:01.157) 0:06:32.850 ********* 2025-08-29 17:33:23.917934 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:33:23.917945 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:33:23.917955 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:33:23.917965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:33:23.917976 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:33:23.917986 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:33:23.917996 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:33:23.918007 | orchestrator | 2025-08-29 17:33:23.918107 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-08-29 17:33:23.918121 | orchestrator | Friday 29 August 2025 17:33:19 +0000 (0:00:00.516) 
0:06:33.366 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Friday 29 August 2025 17:33:23 +0000 (0:00:03.748) 0:06:37.115 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Friday 29 August 2025 17:33:23 +0000 (0:00:00.470) 0:06:37.586 *********
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-2]
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-5]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
Friday 29 August 2025 17:33:24 +0000 (0:00:00.541) 0:06:38.127 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
Friday 29 August 2025 17:33:24 +0000 (0:00:00.478) 0:06:38.606 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Install packages required by docker login] *******
Friday 29 August 2025 17:33:25 +0000 (0:00:00.462) 0:06:39.069 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Friday 29 August 2025 17:33:25 +0000 (0:00:00.594) 0:06:39.664 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Include config tasks] ****************************
Friday 29 August 2025 17:33:27 +0000 (0:00:01.643) 0:06:41.307 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Create plugins directory] ************************
Friday 29 August 2025 17:33:28 +0000 (0:00:00.796) 0:06:42.103 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
TASK [osism.services.docker : Create systemd overlay directory] ****************
Friday 29 August 2025 17:33:28 +0000 (0:00:00.776) 0:06:42.880 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Friday 29 August 2025 17:33:29 +0000 (0:00:00.997) 0:06:43.878 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
Friday 29 August 2025 17:33:31 +0000 (0:00:01.294) 0:06:45.173 *********
skipping: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Copy limits configuration file] ******************
Friday 29 August 2025 17:33:32 +0000 (0:00:01.394) 0:06:46.567 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Copy daemon.json configuration file] *************
Friday 29 August 2025 17:33:33 +0000 (0:00:01.278) 0:06:47.845 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Include service tasks] ***************************
Friday 29 August 2025 17:33:35 +0000 (0:00:01.349) 0:06:49.195 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Reload systemd daemon] ***************************
Friday 29 August 2025 17:33:36 +0000 (0:00:00.947) 0:06:50.143 *********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Manage service] **********************************
Friday 29 August 2025 17:33:37 +0000 (0:00:01.409) 0:06:51.552 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Manage docker socket service] ********************
Friday 29 August 2025 17:33:38 +0000 (0:00:01.192) 0:06:52.745 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Manage containerd service] ***********************
Friday 29 August 2025 17:33:40 +0000 (0:00:01.415) 0:06:54.160 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.docker : Include bootstrap tasks] *************************
Friday 29 August 2025 17:33:41 +0000 (0:00:01.194) 0:06:55.355 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:01.202) 0:06:56.557 *********

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:00.042) 0:06:56.600 *********

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:00.052) 0:06:56.652 *********

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:00.049) 0:06:56.702 *********

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:00.055) 0:06:56.757 *********

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:00.041) 0:06:56.799 *********

TASK [osism.services.docker : Flush handlers] **********************************
Friday 29 August 2025 17:33:42 +0000 (0:00:00.049) 0:06:56.848 *********

RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
Friday 29 August 2025 17:33:42 +0000 (0:00:00.044) 0:06:56.892 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
Friday 29 August 2025 17:33:44 +0000 (0:00:01.275) 0:06:58.167 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
Friday 29 August 2025 17:33:45 +0000 (0:00:01.363) 0:06:59.531 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
Friday 29 August 2025 17:33:46 +0000 (0:00:01.278) 0:07:00.809 *********
skipping: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
Friday 29 August 2025 17:33:49 +0000 (0:00:02.619) 0:07:03.428 *********
skipping: [testbed-node-0]

TASK [osism.services.docker : Add user to docker group] ************************
Friday 29 August 2025 17:33:49 +0000 (0:00:00.091) 0:07:03.520 *********
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.docker : Log into private registry and force re-authorization] ***
Friday 29 August 2025 17:33:50 +0000 (0:00:00.926) 0:07:04.447 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.docker : Include facts tasks] *****************************
Friday 29 August 2025 17:33:51 +0000 (0:00:00.630) 0:07:05.078 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.docker : Create facts directory] **************************
Friday 29 August 2025 17:33:51 +0000 (0:00:00.875) 0:07:05.953 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
2025-08-29 17:34:08.833866 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-08-29 17:34:08.833873 | orchestrator | Friday 29 August 2025 17:33:52 +0000 (0:00:00.792) 0:07:06.746 ********* 2025-08-29 17:34:08.833880 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-08-29 17:34:08.833887 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-08-29 17:34:08.833908 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-08-29 17:34:08.833915 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-08-29 17:34:08.833921 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-08-29 17:34:08.833928 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-08-29 17:34:08.833934 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-08-29 17:34:08.833941 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-08-29 17:34:08.833948 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-08-29 17:34:08.833955 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-08-29 17:34:08.833962 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-08-29 17:34:08.833969 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-08-29 17:34:08.833976 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-08-29 17:34:08.833982 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-08-29 17:34:08.833989 | orchestrator | 2025-08-29 17:34:08.833996 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-08-29 17:34:08.834002 | orchestrator | Friday 29 August 2025 17:33:55 +0000 (0:00:02.502) 0:07:09.249 ********* 2025-08-29 17:34:08.834009 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:08.834057 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 17:34:08.834065 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:08.834072 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:08.834079 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:34:08.834086 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:34:08.834092 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:34:08.834099 | orchestrator | 2025-08-29 17:34:08.834105 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-08-29 17:34:08.834112 | orchestrator | Friday 29 August 2025 17:33:55 +0000 (0:00:00.471) 0:07:09.720 ********* 2025-08-29 17:34:08.834120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:34:08.834128 | orchestrator | 2025-08-29 17:34:08.834135 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-08-29 17:34:08.834141 | orchestrator | Friday 29 August 2025 17:33:56 +0000 (0:00:00.758) 0:07:10.478 ********* 2025-08-29 17:34:08.834169 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:08.834176 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:08.834182 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:08.834189 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:08.834195 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:34:08.834202 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:34:08.834208 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:34:08.834215 | orchestrator | 2025-08-29 17:34:08.834221 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-08-29 17:34:08.834228 | orchestrator | Friday 29 August 2025 17:33:57 +0000 (0:00:00.952) 0:07:11.431 ********* 2025-08-29 
17:34:08.834234 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:08.834246 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:08.834252 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:08.834259 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:08.834265 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:34:08.834272 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:34:08.834278 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:34:08.834285 | orchestrator | 2025-08-29 17:34:08.834291 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-08-29 17:34:08.834298 | orchestrator | Friday 29 August 2025 17:33:58 +0000 (0:00:00.836) 0:07:12.268 ********* 2025-08-29 17:34:08.834304 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:08.834311 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:08.834317 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:08.834324 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:08.834335 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:34:08.834341 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:34:08.834348 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:34:08.834354 | orchestrator | 2025-08-29 17:34:08.834376 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-08-29 17:34:08.834382 | orchestrator | Friday 29 August 2025 17:33:58 +0000 (0:00:00.522) 0:07:12.791 ********* 2025-08-29 17:34:08.834387 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:08.834393 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:08.834399 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:08.834406 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:08.834412 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:34:08.834419 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:34:08.834425 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:34:08.834431 | 
orchestrator | 2025-08-29 17:34:08.834437 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-08-29 17:34:08.834444 | orchestrator | Friday 29 August 2025 17:34:00 +0000 (0:00:01.407) 0:07:14.198 ********* 2025-08-29 17:34:08.834450 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:08.834456 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:08.834462 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:08.834469 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:08.834475 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:34:08.834481 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:34:08.834487 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:34:08.834493 | orchestrator | 2025-08-29 17:34:08.834500 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-08-29 17:34:08.834506 | orchestrator | Friday 29 August 2025 17:34:00 +0000 (0:00:00.501) 0:07:14.700 ********* 2025-08-29 17:34:08.834512 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:08.834518 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:08.834525 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:34:08.834531 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:34:08.834537 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:34:08.834543 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:34:08.834549 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:34:08.834555 | orchestrator | 2025-08-29 17:34:08.834569 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-08-29 17:34:43.226421 | orchestrator | Friday 29 August 2025 17:34:08 +0000 (0:00:08.165) 0:07:22.865 ********* 2025-08-29 17:34:43.226503 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:43.226511 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:43.226518 | orchestrator | 
changed: [testbed-node-1] 2025-08-29 17:34:43.226523 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:34:43.226529 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:34:43.226534 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:34:43.226540 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:34:43.226545 | orchestrator | 2025-08-29 17:34:43.226551 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-08-29 17:34:43.226557 | orchestrator | Friday 29 August 2025 17:34:10 +0000 (0:00:01.443) 0:07:24.309 ********* 2025-08-29 17:34:43.226582 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:43.226588 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:43.226594 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:34:43.226599 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:34:43.226604 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:34:43.226609 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:34:43.226614 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:34:43.226619 | orchestrator | 2025-08-29 17:34:43.226625 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-08-29 17:34:43.226630 | orchestrator | Friday 29 August 2025 17:34:12 +0000 (0:00:01.781) 0:07:26.090 ********* 2025-08-29 17:34:43.226635 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:43.226640 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:34:43.226645 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:34:43.226650 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:34:43.226655 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:34:43.226660 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:34:43.226665 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:34:43.226670 | orchestrator | 2025-08-29 17:34:43.226675 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] 
********************* 2025-08-29 17:34:43.226680 | orchestrator | Friday 29 August 2025 17:34:13 +0000 (0:00:01.686) 0:07:27.777 ********* 2025-08-29 17:34:43.226685 | orchestrator | ok: [testbed-manager] 2025-08-29 17:34:43.226691 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:34:43.226696 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:34:43.226701 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:34:43.226706 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:34:43.226711 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:34:43.226716 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:34:43.226721 | orchestrator | 2025-08-29 17:34:43.226726 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 17:34:43.226731 | orchestrator | Friday 29 August 2025 17:34:14 +0000 (0:00:00.959) 0:07:28.737 ********* 2025-08-29 17:34:43.226737 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:43.226742 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:43.226747 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:43.226752 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:43.226757 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:34:43.226762 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:34:43.226767 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:34:43.226772 | orchestrator | 2025-08-29 17:34:43.226777 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-08-29 17:34:43.226782 | orchestrator | Friday 29 August 2025 17:34:15 +0000 (0:00:00.768) 0:07:29.506 ********* 2025-08-29 17:34:43.226787 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:34:43.226792 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:34:43.226797 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:34:43.226802 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:34:43.226807 | orchestrator | 
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.services.chrony : Gather variables for each operating system] ******
Friday 29 August 2025 17:34:15 +0000 (0:00:00.457) 0:07:29.964 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
Friday 29 August 2025 17:34:16 +0000 (0:00:00.656) 0:07:30.620 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
Friday 29 August 2025 17:34:17 +0000 (0:00:00.464) 0:07:31.084 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.chrony : Populate service facts] **************************
Friday 29 August 2025 17:34:17 +0000 (0:00:00.537) 0:07:31.622 *********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]

TASK [osism.services.chrony : Manage timesyncd service] ************************
Friday 29 August 2025 17:34:23 +0000 (0:00:06.100) 0:07:37.722 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
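The "Manage timesyncd service" task was skipped on all seven hosts here. A chrony role typically has to disable systemd-timesyncd before chrony can own NTP on Ubuntu; a minimal hedged sketch of such a task (the task body and conditional below are assumptions for illustration, not the actual osism.services.chrony source):

```yaml
# Hypothetical sketch -- not the actual osism.services.chrony task.
# Stop and disable systemd-timesyncd so it does not compete with chronyd.
- name: Manage timesyncd service
  ansible.builtin.service:
    name: systemd-timesyncd
    state: stopped
    enabled: false
  # Guarded by the service facts gathered in the preceding task, so the
  # task is skipped (as seen above) when the unit is absent or unmanaged.
  when: "'systemd-timesyncd.service' in ansible_facts.services"
```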
TASK [osism.services.chrony : Include distribution specific install tasks] *****
Friday 29 August 2025 17:34:24 +0000 (0:00:01.206) 0:07:38.332 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.chrony : Install package] *********************************
Friday 29 August 2025 17:34:25 +0000 (0:00:01.206) 0:07:39.539 *********
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.services.chrony : Manage chrony service] ***************************
Friday 29 August 2025 17:34:27 +0000 (0:00:02.013) 0:07:41.553 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
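The chrony tasks that follow in the log render /etc/chrony/chrony.conf from the role's chrony.conf.j2 template. The template itself is not visible in this output; a plausible minimal rendering might look like the fragment below (the server pool names and tuning directives are assumptions, not taken from the role):

```
# Illustrative /etc/chrony/chrony.conf -- the real file is generated from
# chrony.conf.j2 with servers defined in the inventory/configuration.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/chrony/chrony.drift
makestep 1.0 3
rtcsync
```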

TASK [osism.services.chrony : Check if configuration file exists] **************
Friday 29 August 2025 17:34:28 +0000 (0:00:01.289) 0:07:42.842 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.chrony : Copy configuration file] *************************
Friday 29 August 2025 17:34:29 +0000 (0:00:01.193) 0:07:44.036 *********
changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)

TASK [osism.services.lldpd : Include distribution specific install tasks] ******
Friday 29 August 2025 17:34:31 +0000 (0:00:01.909) 0:07:45.946 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.lldpd : Install lldpd package] ****************************
Friday 29 August 2025 17:34:32 +0000 (0:00:00.914) 0:07:46.860 *********
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-4]

TASK [osism.services.lldpd : Manage lldpd service] *****************************
Friday 29 August 2025 17:34:43 +0000 (0:00:10.392) 0:07:57.253 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
Friday 29 August 2025 17:34:45 +0000 (0:00:01.844) 0:07:59.097 *********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
Friday 29 August 2025 17:34:46 +0000 (0:00:01.291) 0:08:00.388 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Apply bootstrap role part 2] *********************************************

TASK [Include hardening role] **************************************************
Friday 29 August 2025 17:34:47 +0000 (0:00:01.335) 0:08:01.724 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Apply bootstrap roles part 3] ********************************************

TASK [osism.services.journald : Copy configuration file] ***********************
Friday 29 August 2025 17:34:48 +0000 (0:00:00.533) 0:08:02.257 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.journald : Manage journald service] ***********************
Friday 29 August 2025 17:34:49 +0000 (0:00:01.263) 0:08:03.521 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Include auditd role] *****************************************************
Friday 29 August 2025 17:34:50 +0000 (0:00:01.385) 0:08:04.906 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
Friday 29 August 2025 17:34:51 +0000 (0:00:01.114) 0:08:06.021 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Set state bootstrap] *****************************************************

TASK [Set osism.bootstrap.status fact] *****************************************
Friday 29 August 2025 17:34:53 +0000 (0:00:01.211) 0:08:07.233 *********
included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.state : Create custom facts directory] *********************
Friday 29 August 2025 17:34:54 +0000 (0:00:01.055) 0:08:08.288 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.state : Write state into file] *****************************
Friday 29 August 2025 17:34:55 +0000 (0:00:00.890) 0:08:09.178 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [Set osism.bootstrap.timestamp fact] **************************************
Friday 29 August 2025 17:34:56 +0000 (0:00:01.212) 0:08:10.390 *********
included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.state : Create custom facts directory] *********************
Friday 29 August 2025 17:34:57 +0000 (0:00:01.063) 0:08:11.454 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.state : Write state into file] *****************************
Friday 29 August 2025 17:34:58 +0000 (0:00:00.970) 0:08:12.425 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager : ok=162  changed=38  unreachable=0  failed=0  skipped=41  rescued=0  ignored=0
testbed-node-0  : ok=170  changed=66  unreachable=0  failed=0  skipped=37  rescued=0  ignored=0
testbed-node-1  : ok=170  changed=66  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
testbed-node-2  : ok=170  changed=66  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
testbed-node-3  : ok=169  changed=63  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
testbed-node-4  : ok=169  changed=63  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0
testbed-node-5  : ok=169  changed=63  unreachable=0  failed=0  skipped=36  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Friday 29 August 2025 17:34:59 +0000 (0:00:01.186) 0:08:13.611 *********
===============================================================================
osism.commons.packages : Install required packages --------------------- 80.35s
osism.commons.packages : Download required packages -------------------- 37.37s
osism.commons.cleanup : Cleanup installed packages --------------------- 34.94s
osism.commons.repository : Update package cache ------------------------ 18.18s
osism.commons.systohc : Install util-linux-extra package --------------- 14.09s
osism.commons.packages : Remove dependencies that are no longer required -- 11.80s
osism.services.docker : Install docker package ------------------------- 11.08s
osism.services.lldpd : Install lldpd package --------------------------- 10.39s
osism.services.docker : Install containerd package --------------------- 10.33s
osism.services.docker : Install docker-cli package ---------------------- 9.41s
osism.services.smartd : Install smartmontools package ------------------- 9.14s
osism.services.rng : Install rng package -------------------------------- 8.47s
osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.17s
osism.services.docker : Add repository ---------------------------------- 7.99s
osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.92s
osism.commons.cleanup : Remove cloudinit package ------------------------ 7.90s
osism.services.docker : Install apt-transport-https package ------------- 6.61s
osism.commons.cleanup : Remove dependencies that are no longer required --- 6.16s
osism.services.chrony : Populate service facts -------------------------- 6.10s
osism.commons.services : Populate service facts ------------------------- 5.97s
+ [[ -e /etc/redhat-release ]]
+ osism apply network
2025-08-29 17:35:13 | INFO  | Task d9ef3c9d-c5d9-4f0d-aba3-31fbe5204e53 (network) was prepared for execution.
2025-08-29 17:35:13 | INFO  | It takes a moment until task d9ef3c9d-c5d9-4f0d-aba3-31fbe5204e53 (network) has been started and output is visible here.

PLAY [Apply role network] ******************************************************

TASK [osism.commons.network : Gather variables for each operating system] ******
Friday 29 August 2025 17:35:18 +0000 (0:00:00.325) 0:00:00.325 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Include type specific tasks] *********************
Friday 29 August 2025 17:35:19 +0000 (0:00:00.803) 0:00:01.129 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.network : Install required packages] ***********************
Friday 29 August 2025 17:35:20 +0000 (0:00:01.267) 0:00:02.396 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.commons.network : Remove ifupdown package] *************************
Friday 29 August 2025 17:35:22 +0000 (0:00:01.750) 0:00:04.147 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.commons.network : Create required directories] *********************
Friday 29 August 2025 17:35:23 +0000 (0:00:01.658) 0:00:05.805 *********
ok: [testbed-manager] => (item=/etc/netplan)
ok: [testbed-node-0] => (item=/etc/netplan)
ok: [testbed-node-1] => (item=/etc/netplan)
ok: [testbed-node-2] => (item=/etc/netplan)
ok: [testbed-node-3] => (item=/etc/netplan)
ok: [testbed-node-4] => (item=/etc/netplan)
ok: [testbed-node-5] => (item=/etc/netplan)

TASK [osism.commons.network : Prepare netplan configuration template] **********
Friday 29 August 2025 17:35:24 +0000 (0:00:01.042) 0:00:06.848 *********
ok: [testbed-node-2 -> localhost]
ok: [testbed-node-0 -> localhost]
ok: [testbed-manager -> localhost]
ok: [testbed-node-4 -> localhost]
ok: [testbed-node-1 -> localhost]
ok: [testbed-node-3 -> localhost]
ok: [testbed-node-5 -> localhost]

TASK [osism.commons.network : Copy netplan configuration] **********************
Friday 29 August 2025 17:35:28 +0000 (0:00:04.026) 0:00:10.874 *********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.network : Remove netplan configuration template] ***********
Friday 29 August 2025 17:35:30 +0000 (0:00:01.664) 0:00:12.539 *********
ok: [testbed-manager -> localhost]
ok: [testbed-node-0 -> localhost]
ok: [testbed-node-1 -> localhost]
ok: [testbed-node-3 -> localhost]
ok: [testbed-node-2 -> localhost]
ok: [testbed-node-4 -> localhost]
ok: [testbed-node-5 -> localhost]

TASK [osism.commons.network : Check if path for interface file exists] *********
Friday 29 August 2025 17:35:32 +0000 (0:00:02.498) 0:00:15.037 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Copy interfaces file] ****************************
Friday 29 August 2025 17:35:34 +0000 (0:00:01.216) 0:00:16.253 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.network : Install package networkd-dispatcher] *************
Friday 29 August 2025 17:35:34 +0000 (0:00:00.807) 0:00:17.060 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.network : Copy dispatcher scripts] *************************
Friday 29 August 2025 17:35:37 +0000 (0:00:02.246) 0:00:19.307 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})

TASK [osism.commons.network : Manage service networkd-dispatcher] **************
Friday 29 August 2025 17:35:38 +0000 (0:00:00.924) 0:00:20.231 *********
ok: [testbed-manager]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.network : Include cleanup tasks] ***************************
Friday 29 August 2025 17:35:39 +0000 (0:00:01.794) 0:00:22.026 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.network : List existing configuration files] ***************
Friday 29 August 2025 17:35:41 +0000 (0:00:00.997) 0:00:23.430 *********
ok: [testbed-node-0]
17:35:44.594215 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:44.594220 | orchestrator | ok: [testbed-manager] 2025-08-29 17:35:44.594224 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:44.594229 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:35:44.594234 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:35:44.594239 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:35:44.594244 | orchestrator | 2025-08-29 17:35:44.594264 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-08-29 17:35:44.594269 | orchestrator | Friday 29 August 2025 17:35:42 +0000 (0:00:00.997) 0:00:24.428 ********* 2025-08-29 17:35:44.594274 | orchestrator | ok: [testbed-manager] 2025-08-29 17:35:44.594279 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:35:44.594284 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:35:44.594288 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:35:44.594293 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:35:44.594298 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:35:44.594303 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:35:44.594308 | orchestrator | 2025-08-29 17:35:44.594313 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 17:35:44.594318 | orchestrator | Friday 29 August 2025 17:35:43 +0000 (0:00:00.921) 0:00:25.349 ********* 2025-08-29 17:35:44.594323 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594328 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594332 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594337 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594342 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594347 | orchestrator 
| skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594352 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594357 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594362 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594367 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-08-29 17:35:44.594411 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594417 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594421 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594426 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-08-29 17:35:44.594431 | orchestrator | 2025-08-29 17:35:44.594441 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-08-29 17:36:03.137596 | orchestrator | Friday 29 August 2025 17:35:44 +0000 (0:00:01.294) 0:00:26.644 ********* 2025-08-29 17:36:03.137699 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:36:03.137712 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:36:03.137720 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:36:03.137728 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:36:03.137736 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:36:03.137744 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:36:03.137751 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:36:03.137759 | orchestrator | 2025-08-29 17:36:03.137768 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-08-29 17:36:03.137776 | orchestrator | Friday 29 August 2025 17:35:45 +0000 
(0:00:00.674) 0:00:27.318 ********* 2025-08-29 17:36:03.137789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-5, testbed-node-0, testbed-manager, testbed-node-2, testbed-node-1, testbed-node-4, testbed-node-3 2025-08-29 17:36:03.137798 | orchestrator | 2025-08-29 17:36:03.137806 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-08-29 17:36:03.137813 | orchestrator | Friday 29 August 2025 17:35:50 +0000 (0:00:04.783) 0:00:32.102 ********* 2025-08-29 17:36:03.137823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.137875 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137915 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137922 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.137930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.137939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.137949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.137981 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.137993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138073 | orchestrator | 2025-08-29 17:36:03.138086 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-08-29 17:36:03.138109 | orchestrator | Friday 29 August 2025 17:35:56 +0000 (0:00:06.466) 0:00:38.569 ********* 2025-08-29 17:36:03.138122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138149 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 
'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 17:36:03.138217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138260 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:03.138300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:10.297238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 17:36:10.297421 | orchestrator | 2025-08-29 17:36:10.297441 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 17:36:10.297454 | orchestrator | Friday 29 August 2025 17:36:03 +0000 (0:00:06.616) 0:00:45.185 ********* 2025-08-29 17:36:10.297467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:36:10.297479 | orchestrator | 2025-08-29 17:36:10.297491 | 
orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 17:36:10.297501 | orchestrator | Friday 29 August 2025 17:36:04 +0000 (0:00:01.453) 0:00:46.639 ********* 2025-08-29 17:36:10.297513 | orchestrator | ok: [testbed-manager] 2025-08-29 17:36:10.297525 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:36:10.297535 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:36:10.297546 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:36:10.297557 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:36:10.297567 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:36:10.297578 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:36:10.297589 | orchestrator | 2025-08-29 17:36:10.297600 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-08-29 17:36:10.297611 | orchestrator | Friday 29 August 2025 17:36:05 +0000 (0:00:01.310) 0:00:47.949 ********* 2025-08-29 17:36:10.297635 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.297647 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.297658 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.297669 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:36:10.297680 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:36:10.297707 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.297718 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.297729 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.297740 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  
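The "Create systemd networkd netdev files" task above loops over per-host VXLAN definitions (`vni`, `mtu`, `local_ip`, `dests`). As a hedged illustration only — the role's actual Jinja2 template is not visible in this log — the following Python sketch renders one logged loop item into the minimal `.netdev` unit shape systemd-networkd expects for a VXLAN device. Handling of the unicast peer list (`dests`) is deliberately omitted; in practice it would be configured separately (for example as FDB entries in the matching `.network` file).

```python
# Sketch only (assumption: NOT the actual osism.commons.network template).
# Renders one loop item from the "Create systemd networkd netdev files"
# task above into a minimal systemd-networkd .netdev unit.
item = {  # values taken verbatim from the testbed-manager vxlan0 item in the log
    "key": "vxlan0",
    "value": {
        "addresses": ["192.168.112.5/20"],
        "dests": ["192.168.16.10", "192.168.16.11", "192.168.16.12",
                  "192.168.16.13", "192.168.16.14", "192.168.16.15"],
        "local_ip": "192.168.16.5",
        "mtu": 1350,
        "vni": 42,
    },
}

def render_netdev(item: dict) -> str:
    """Render a minimal 30-<name>.netdev for a unicast VXLAN device."""
    name, v = item["key"], item["value"]
    lines = [
        "[NetDev]",
        f"Name={name}",
        "Kind=vxlan",
        f"MTUBytes={v['mtu']}",
        "",
        "[VXLAN]",
        f"VNI={v['vni']}",
        f"Local={v['local_ip']}",
    ]
    return "\n".join(lines) + "\n"

print(render_netdev(item))
```

This matches the file names seen in the cleanup task above (`/etc/systemd/network/30-vxlan0.netdev` etc.), which is why those paths are skipped rather than removed: they are still in use.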
2025-08-29 17:36:10.297754 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:36:10.297765 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.297778 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.297791 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.297803 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:36:10.297815 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:36:10.297828 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.297840 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.297852 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.297864 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:36:10.297876 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:36:10.297888 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.297900 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.297912 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.297933 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:36:10.297944 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.297955 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.297966 | orchestrator | skipping: [testbed-node-4] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.297977 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:36:10.297987 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:36:10.297998 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:36:10.298009 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 17:36:10.298099 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 17:36:10.298119 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 17:36:10.298140 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 17:36:10.298159 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:36:10.298176 | orchestrator | 2025-08-29 17:36:10.298188 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 17:36:10.298218 | orchestrator | Friday 29 August 2025 17:36:08 +0000 (0:00:02.313) 0:00:50.262 ********* 2025-08-29 17:36:10.298229 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:36:10.298241 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:36:10.298251 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:36:10.298262 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:36:10.298273 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:36:10.298283 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:36:10.298294 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:36:10.298304 | orchestrator | 2025-08-29 17:36:10.298315 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 17:36:10.298326 | orchestrator | Friday 29 August 2025 17:36:08 +0000 (0:00:00.687) 0:00:50.950 ********* 2025-08-29 17:36:10.298336 | orchestrator | skipping: 
[testbed-manager] 2025-08-29 17:36:10.298347 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:36:10.298358 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:36:10.298393 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:36:10.298405 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:36:10.298416 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:36:10.298426 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:36:10.298437 | orchestrator | 2025-08-29 17:36:10.298448 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:36:10.298461 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:36:10.298473 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:36:10.298484 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:36:10.298495 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:36:10.298506 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:36:10.298523 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:36:10.298534 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 17:36:10.298554 | orchestrator | 2025-08-29 17:36:10.298565 | orchestrator | 2025-08-29 17:36:10.298576 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:36:10.298586 | orchestrator | Friday 29 August 2025 17:36:09 +0000 (0:00:00.772) 0:00:51.722 ********* 2025-08-29 17:36:10.298597 | orchestrator | =============================================================================== 
2025-08-29 17:36:10.298608 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.62s 2025-08-29 17:36:10.298618 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.47s 2025-08-29 17:36:10.298629 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.78s 2025-08-29 17:36:10.298639 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 4.03s 2025-08-29 17:36:10.298650 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.50s 2025-08-29 17:36:10.298661 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.31s 2025-08-29 17:36:10.298671 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.25s 2025-08-29 17:36:10.298682 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.79s 2025-08-29 17:36:10.298693 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.75s 2025-08-29 17:36:10.298703 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s 2025-08-29 17:36:10.298714 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.66s 2025-08-29 17:36:10.298724 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.45s 2025-08-29 17:36:10.298735 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.40s 2025-08-29 17:36:10.298745 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.31s 2025-08-29 17:36:10.298756 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.29s 2025-08-29 17:36:10.298766 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2025-08-29 
17:36:10.298777 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.22s 2025-08-29 17:36:10.298788 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s 2025-08-29 17:36:10.298798 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s 2025-08-29 17:36:10.298809 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.92s 2025-08-29 17:36:10.662646 | orchestrator | + osism apply wireguard 2025-08-29 17:36:22.880881 | orchestrator | 2025-08-29 17:36:22 | INFO  | Task c3030611-ac43-4047-bf11-ef438117bfb0 (wireguard) was prepared for execution. 2025-08-29 17:36:22.880991 | orchestrator | 2025-08-29 17:36:22 | INFO  | It takes a moment until task c3030611-ac43-4047-bf11-ef438117bfb0 (wireguard) has been started and output is visible here. 2025-08-29 17:36:45.328504 | orchestrator | 2025-08-29 17:36:45.328621 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 17:36:45.328638 | orchestrator | 2025-08-29 17:36:45.328651 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 17:36:45.328663 | orchestrator | Friday 29 August 2025 17:36:27 +0000 (0:00:00.247) 0:00:00.247 ********* 2025-08-29 17:36:45.328674 | orchestrator | ok: [testbed-manager] 2025-08-29 17:36:45.328686 | orchestrator | 2025-08-29 17:36:45.328698 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 17:36:45.328708 | orchestrator | Friday 29 August 2025 17:36:29 +0000 (0:00:01.963) 0:00:02.211 ********* 2025-08-29 17:36:45.328719 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.328731 | orchestrator | 2025-08-29 17:36:45.328742 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 17:36:45.328752 | orchestrator | 
Friday 29 August 2025 17:36:36 +0000 (0:00:07.595) 0:00:09.806 ********* 2025-08-29 17:36:45.328788 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.328799 | orchestrator | 2025-08-29 17:36:45.328810 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 17:36:45.328821 | orchestrator | Friday 29 August 2025 17:36:37 +0000 (0:00:00.625) 0:00:10.432 ********* 2025-08-29 17:36:45.328832 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.328843 | orchestrator | 2025-08-29 17:36:45.328854 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 17:36:45.328864 | orchestrator | Friday 29 August 2025 17:36:38 +0000 (0:00:00.481) 0:00:10.913 ********* 2025-08-29 17:36:45.328875 | orchestrator | ok: [testbed-manager] 2025-08-29 17:36:45.328886 | orchestrator | 2025-08-29 17:36:45.328896 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 17:36:45.328907 | orchestrator | Friday 29 August 2025 17:36:38 +0000 (0:00:00.555) 0:00:11.469 ********* 2025-08-29 17:36:45.328918 | orchestrator | ok: [testbed-manager] 2025-08-29 17:36:45.328928 | orchestrator | 2025-08-29 17:36:45.328939 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 17:36:45.328950 | orchestrator | Friday 29 August 2025 17:36:39 +0000 (0:00:00.587) 0:00:12.056 ********* 2025-08-29 17:36:45.328960 | orchestrator | ok: [testbed-manager] 2025-08-29 17:36:45.328971 | orchestrator | 2025-08-29 17:36:45.328982 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 17:36:45.329008 | orchestrator | Friday 29 August 2025 17:36:39 +0000 (0:00:00.472) 0:00:12.529 ********* 2025-08-29 17:36:45.329021 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.329034 | orchestrator | 2025-08-29 17:36:45.329046 | orchestrator 
| TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 17:36:45.329058 | orchestrator | Friday 29 August 2025 17:36:41 +0000 (0:00:01.330) 0:00:13.859 ********* 2025-08-29 17:36:45.329069 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 17:36:45.329083 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.329095 | orchestrator | 2025-08-29 17:36:45.329107 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 17:36:45.329119 | orchestrator | Friday 29 August 2025 17:36:42 +0000 (0:00:01.013) 0:00:14.873 ********* 2025-08-29 17:36:45.329131 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.329142 | orchestrator | 2025-08-29 17:36:45.329155 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 17:36:45.329167 | orchestrator | Friday 29 August 2025 17:36:43 +0000 (0:00:01.796) 0:00:16.669 ********* 2025-08-29 17:36:45.329179 | orchestrator | changed: [testbed-manager] 2025-08-29 17:36:45.329192 | orchestrator | 2025-08-29 17:36:45.329204 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:36:45.329217 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:36:45.329231 | orchestrator | 2025-08-29 17:36:45.329243 | orchestrator | 2025-08-29 17:36:45.329256 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:36:45.329269 | orchestrator | Friday 29 August 2025 17:36:44 +0000 (0:00:01.054) 0:00:17.724 ********* 2025-08-29 17:36:45.329280 | orchestrator | =============================================================================== 2025-08-29 17:36:45.329293 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.60s 2025-08-29 17:36:45.329305 | orchestrator | 
osism.services.wireguard : Install iptables package --------------------- 1.96s
2025-08-29 17:36:45.329317 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.80s
2025-08-29 17:36:45.329329 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.33s
2025-08-29 17:36:45.329341 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.05s
2025-08-29 17:36:45.329354 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2025-08-29 17:36:45.329367 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.63s
2025-08-29 17:36:45.329412 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.59s
2025-08-29 17:36:45.329424 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2025-08-29 17:36:45.329434 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s
2025-08-29 17:36:45.329445 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.47s
2025-08-29 17:36:45.688123 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-08-29 17:36:45.729996 | orchestrator |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2025-08-29 17:36:45.730157 | orchestrator |                                  Dload  Upload   Total   Spent    Left  Speed
2025-08-29 17:36:45.815153 | orchestrator |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0 100    14  100    14    0     0    163      0 --:--:-- --:--:-- --:--:--   164
2025-08-29 17:36:45.831913 | orchestrator | + osism apply --environment custom workarounds
2025-08-29 17:36:48.015292 | orchestrator | 2025-08-29 17:36:48 | INFO  | Trying to run play workarounds in environment custom
2025-08-29 17:36:58.156135 | orchestrator | 2025-08-29 17:36:58 | INFO  | Task 73ed82f2-52cb-44de-a99d-fc714f64ebc5 (workarounds) was prepared for execution.
2025-08-29 17:36:58.156253 | orchestrator | 2025-08-29 17:36:58 | INFO  | It takes a moment until task 73ed82f2-52cb-44de-a99d-fc714f64ebc5 (workarounds) has been started and output is visible here.
2025-08-29 17:37:23.569807 | orchestrator |
2025-08-29 17:37:23.569920 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:37:23.569937 | orchestrator |
2025-08-29 17:37:23.569949 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-08-29 17:37:23.569961 | orchestrator | Friday 29 August 2025 17:37:02 +0000 (0:00:00.168) 0:00:00.168 *********
2025-08-29 17:37:23.569972 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-08-29 17:37:23.569984 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-08-29 17:37:23.569995 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-08-29 17:37:23.570006 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-08-29 17:37:23.570075 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-08-29 17:37:23.570087 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-08-29 17:37:23.570098 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-08-29 17:37:23.570109 | orchestrator |
2025-08-29 17:37:23.570121 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-08-29 17:37:23.570132 | orchestrator |
2025-08-29 17:37:23.570143 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-08-29 17:37:23.570154 | orchestrator | Friday 29 August 2025 17:37:03 +0000 (0:00:00.800) 0:00:00.969 *********
2025-08-29 17:37:23.570165 | orchestrator | ok: [testbed-manager]
2025-08-29 17:37:23.570177 | orchestrator |
2025-08-29 17:37:23.570189 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-08-29 17:37:23.570200 | orchestrator |
2025-08-29 17:37:23.570237 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-08-29 17:37:23.570249 | orchestrator | Friday 29 August 2025 17:37:05 +0000 (0:00:02.583) 0:00:03.553 *********
2025-08-29 17:37:23.570261 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:37:23.570272 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:37:23.570283 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:37:23.570294 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:37:23.570305 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:37:23.570316 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:37:23.570327 | orchestrator |
2025-08-29 17:37:23.570338 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-08-29 17:37:23.570350 | orchestrator |
2025-08-29 17:37:23.570425 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-08-29 17:37:23.570438 | orchestrator | Friday 29 August 2025 17:37:07 +0000 (0:00:01.812) 0:00:05.365 *********
2025-08-29 17:37:23.570450 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 17:37:23.570462 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 17:37:23.570473 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 17:37:23.570484 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 17:37:23.570495 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 17:37:23.570506 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-08-29 17:37:23.570516 | orchestrator |
2025-08-29 17:37:23.570527 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-08-29 17:37:23.570538 | orchestrator | Friday 29 August 2025 17:37:09 +0000 (0:00:01.462) 0:00:06.828 *********
2025-08-29 17:37:23.570549 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:37:23.570560 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:37:23.570571 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:37:23.570582 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:37:23.570593 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:37:23.570603 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:37:23.570614 | orchestrator |
2025-08-29 17:37:23.570625 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-08-29 17:37:23.570635 | orchestrator | Friday 29 August 2025 17:37:12 +0000 (0:00:03.844) 0:00:10.673 *********
2025-08-29 17:37:23.570646 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:37:23.570657 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:37:23.570668 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:37:23.570678 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:37:23.570689 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:37:23.570700 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:37:23.570711 | orchestrator |
2025-08-29 17:37:23.570722 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-08-29 17:37:23.570733 | orchestrator |
2025-08-29 17:37:23.570744 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-08-29 17:37:23.570755 | orchestrator | Friday 29 August 2025 17:37:13 +0000 (0:00:00.775) 0:00:11.448 *********
2025-08-29 17:37:23.570766 | orchestrator | changed: [testbed-manager]
2025-08-29 17:37:23.570777 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:37:23.570787 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:37:23.570798 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:37:23.570809 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:37:23.570819 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:37:23.570830 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:37:23.570841 | orchestrator |
2025-08-29 17:37:23.570852 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-08-29 17:37:23.570863 | orchestrator | Friday 29 August 2025 17:37:15 +0000 (0:00:01.642) 0:00:13.090 *********
2025-08-29 17:37:23.570873 | orchestrator | changed: [testbed-manager]
2025-08-29 17:37:23.570884 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:37:23.570895 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:37:23.570906 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:37:23.570917 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:37:23.570927 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:37:23.570957 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:37:23.570969 | orchestrator |
2025-08-29 17:37:23.570980 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-08-29 17:37:23.570991 | orchestrator | Friday 29 August 2025 17:37:16 +0000 (0:00:01.520) 0:00:14.610 *********
2025-08-29 17:37:23.571010 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:37:23.571021 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:37:23.571032 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:37:23.571043 | orchestrator | ok: [testbed-manager]
2025-08-29 17:37:23.571053 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:37:23.571064 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:37:23.571075 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:37:23.571086 | orchestrator |
2025-08-29 17:37:23.571097 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-08-29 17:37:23.571107 | orchestrator | Friday 29 August 2025 17:37:18 +0000 (0:00:01.491) 0:00:16.101 *********
2025-08-29 17:37:23.571118 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:37:23.571129 | orchestrator | changed: [testbed-manager]
2025-08-29 17:37:23.571140 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:37:23.571151 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:37:23.571161 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:37:23.571172 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:37:23.571183 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:37:23.571194 | orchestrator |
2025-08-29 17:37:23.571205 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-08-29 17:37:23.571215 | orchestrator | Friday 29 August 2025 17:37:20 +0000 (0:00:01.637) 0:00:17.738 *********
2025-08-29 17:37:23.571226 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:37:23.571237 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:37:23.571248 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:37:23.571258 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:37:23.571275 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:37:23.571286 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:37:23.571297 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:37:23.571307 | orchestrator |
2025-08-29 17:37:23.571319 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-08-29 17:37:23.571330 | orchestrator |
2025-08-29 17:37:23.571341 | orchestrator | TASK [Install python3-docker] **************************************************
2025-08-29 17:37:23.571352 | orchestrator | Friday 29 August 2025 17:37:20 +0000 (0:00:00.638) 0:00:18.377 *********
2025-08-29 17:37:23.571362 | orchestrator | ok: [testbed-manager]
2025-08-29 17:37:23.571373 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:37:23.571407 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:37:23.571418 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:37:23.571429 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:37:23.571440 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:37:23.571450 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:37:23.571461 | orchestrator |
2025-08-29 17:37:23.571472 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:37:23.571484 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:37:23.571496 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:23.571507 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:23.571518 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:23.571529 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:23.571540 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:23.571551 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:23.571569 | orchestrator |
2025-08-29 17:37:23.571580 | orchestrator |
2025-08-29 17:37:23.571591 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:37:23.571602 | orchestrator | Friday 29 August 2025 17:37:23 +0000 (0:00:02.884) 0:00:21.261 *********
2025-08-29 17:37:23.571612 | orchestrator | ===============================================================================
2025-08-29 17:37:23.571623 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.84s
2025-08-29 17:37:23.571634 | orchestrator | Install python3-docker -------------------------------------------------- 2.88s
2025-08-29 17:37:23.571645 | orchestrator | Apply netplan configuration --------------------------------------------- 2.58s
2025-08-29 17:37:23.571655 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s
2025-08-29 17:37:23.571666 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s
2025-08-29 17:37:23.571677 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.64s
2025-08-29 17:37:23.571688 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.52s
2025-08-29 17:37:23.571699 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s
2025-08-29 17:37:23.571710 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s
2025-08-29 17:37:23.571720 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2025-08-29 17:37:23.571732 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s
2025-08-29 17:37:23.571749 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2025-08-29 17:37:24.066668 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-08-29 17:37:35.872049 | orchestrator | 2025-08-29 17:37:35 | INFO  | Task e251915d-84d8-4546-9e37-5e42c9ce9803 (reboot) was prepared for execution.
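The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` invocation above passes an explicit confirmation extra-var, which is why the "Exit playbook, if user did not mean to reboot systems" guard task reports `skipping` in the play output that follows. A minimal sketch of the same confirmation-gate pattern in shell (`confirm_reboot` is a hypothetical helper name, not the playbook's actual implementation):

```shell
# Sketch of a confirmation gate like the reboot play's "ireallymeanit" guard.
# confirm_reboot and its parameter are assumptions, not osism code.
confirm_reboot() {
    local ireallymeanit="${1:-no}"
    if [[ "$ireallymeanit" != "yes" ]]; then
        # Refuse to proceed unless the caller explicitly confirmed.
        echo "Exiting: pass ireallymeanit=yes to really reboot" >&2
        return 1
    fi
    return 0
}
```

Gating a destructive play on an explicit extra-var turns an accidental `osism apply reboot` into a no-op instead of a fleet-wide reboot.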
2025-08-29 17:37:35.872164 | orchestrator | 2025-08-29 17:37:35 | INFO  | It takes a moment until task e251915d-84d8-4546-9e37-5e42c9ce9803 (reboot) has been started and output is visible here.
2025-08-29 17:37:46.931969 | orchestrator |
2025-08-29 17:37:46.932083 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:37:46.932100 | orchestrator |
2025-08-29 17:37:46.932113 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:37:46.932126 | orchestrator | Friday 29 August 2025 17:37:40 +0000 (0:00:00.234) 0:00:00.234 *********
2025-08-29 17:37:46.932138 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:37:46.932150 | orchestrator |
2025-08-29 17:37:46.932162 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:37:46.932175 | orchestrator | Friday 29 August 2025 17:37:40 +0000 (0:00:00.104) 0:00:00.338 *********
2025-08-29 17:37:46.932186 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:37:46.932197 | orchestrator |
2025-08-29 17:37:46.932209 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:37:46.932220 | orchestrator | Friday 29 August 2025 17:37:41 +0000 (0:00:00.992) 0:00:01.330 *********
2025-08-29 17:37:46.932232 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:37:46.932243 | orchestrator |
2025-08-29 17:37:46.932255 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:37:46.932266 | orchestrator |
2025-08-29 17:37:46.932278 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:37:46.932289 | orchestrator | Friday 29 August 2025 17:37:41 +0000 (0:00:00.113) 0:00:01.444 *********
2025-08-29 17:37:46.932301 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:37:46.932312 | orchestrator |
2025-08-29 17:37:46.932324 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:37:46.932335 | orchestrator | Friday 29 August 2025 17:37:41 +0000 (0:00:00.099) 0:00:01.544 *********
2025-08-29 17:37:46.932375 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:37:46.932464 | orchestrator |
2025-08-29 17:37:46.932476 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:37:46.932487 | orchestrator | Friday 29 August 2025 17:37:42 +0000 (0:00:00.711) 0:00:02.256 *********
2025-08-29 17:37:46.932498 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:37:46.932511 | orchestrator |
2025-08-29 17:37:46.932524 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:37:46.932536 | orchestrator |
2025-08-29 17:37:46.932548 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:37:46.932560 | orchestrator | Friday 29 August 2025 17:37:42 +0000 (0:00:00.128) 0:00:02.384 *********
2025-08-29 17:37:46.932572 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:37:46.932586 | orchestrator |
2025-08-29 17:37:46.932598 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:37:46.932610 | orchestrator | Friday 29 August 2025 17:37:42 +0000 (0:00:00.261) 0:00:02.646 *********
2025-08-29 17:37:46.932622 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:37:46.932634 | orchestrator |
2025-08-29 17:37:46.932647 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:37:46.932659 | orchestrator | Friday 29 August 2025 17:37:43 +0000 (0:00:00.718) 0:00:03.364 *********
2025-08-29 17:37:46.932671 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:37:46.932684 | orchestrator |
2025-08-29 17:37:46.932700 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:37:46.932712 | orchestrator |
2025-08-29 17:37:46.932725 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:37:46.932737 | orchestrator | Friday 29 August 2025 17:37:43 +0000 (0:00:00.133) 0:00:03.498 *********
2025-08-29 17:37:46.932750 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:37:46.932762 | orchestrator |
2025-08-29 17:37:46.932773 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:37:46.932784 | orchestrator | Friday 29 August 2025 17:37:43 +0000 (0:00:00.123) 0:00:03.621 *********
2025-08-29 17:37:46.932795 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:37:46.932805 | orchestrator |
2025-08-29 17:37:46.932816 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:37:46.932827 | orchestrator | Friday 29 August 2025 17:37:44 +0000 (0:00:00.668) 0:00:04.290 *********
2025-08-29 17:37:46.932838 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:37:46.932848 | orchestrator |
2025-08-29 17:37:46.932859 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:37:46.932870 | orchestrator |
2025-08-29 17:37:46.932880 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:37:46.932891 | orchestrator | Friday 29 August 2025 17:37:44 +0000 (0:00:00.122) 0:00:04.412 *********
2025-08-29 17:37:46.932902 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:37:46.932913 | orchestrator |
2025-08-29 17:37:46.932924 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:37:46.932934 | orchestrator | Friday 29 August 2025 17:37:44 +0000 (0:00:00.111) 0:00:04.524 *********
2025-08-29 17:37:46.932945 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:37:46.932956 | orchestrator |
2025-08-29 17:37:46.932967 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:37:46.932978 | orchestrator | Friday 29 August 2025 17:37:45 +0000 (0:00:00.757) 0:00:05.281 *********
2025-08-29 17:37:46.932989 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:37:46.932999 | orchestrator |
2025-08-29 17:37:46.933010 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 17:37:46.933021 | orchestrator |
2025-08-29 17:37:46.933032 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 17:37:46.933043 | orchestrator | Friday 29 August 2025 17:37:45 +0000 (0:00:00.138) 0:00:05.419 *********
2025-08-29 17:37:46.933053 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:37:46.933073 | orchestrator |
2025-08-29 17:37:46.933085 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 17:37:46.933095 | orchestrator | Friday 29 August 2025 17:37:45 +0000 (0:00:00.114) 0:00:05.533 *********
2025-08-29 17:37:46.933106 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:37:46.933117 | orchestrator |
2025-08-29 17:37:46.933128 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 17:37:46.933139 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.716) 0:00:06.250 *********
2025-08-29 17:37:46.933166 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:37:46.933178 | orchestrator |
2025-08-29 17:37:46.933189 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:37:46.933201 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:46.933230 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:46.933242 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:46.933257 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:46.933268 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:46.933279 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 17:37:46.933290 | orchestrator |
2025-08-29 17:37:46.933301 | orchestrator |
2025-08-29 17:37:46.933312 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:37:46.933322 | orchestrator | Friday 29 August 2025 17:37:46 +0000 (0:00:00.038) 0:00:06.289 *********
2025-08-29 17:37:46.933333 | orchestrator | ===============================================================================
2025-08-29 17:37:46.933344 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.56s
2025-08-29 17:37:46.933354 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.82s
2025-08-29 17:37:46.933365 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s
2025-08-29 17:37:47.266146 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-08-29 17:37:59.337991 | orchestrator | 2025-08-29 17:37:59 | INFO  | Task 8218df96-be30-424f-9c71-0a14740a6fa0 (wait-for-connection) was prepared for execution.
2025-08-29 17:37:59.338162 | orchestrator | 2025-08-29 17:37:59 | INFO  | It takes a moment until task 8218df96-be30-424f-9c71-0a14740a6fa0 (wait-for-connection) has been started and output is visible here.
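The wait-for-connection task started above blocks until every freshly rebooted node accepts connections again before the deployment continues. A rough shell equivalent of that polling pattern (a sketch only: `wait_for_ssh` is an assumed helper using plain ssh; the play itself goes through Ansible's connection plugin, not this code):

```shell
# Sketch, assuming plain SSH: poll until a host is reachable again after a
# reboot, giving up once an overall deadline has passed.
wait_for_ssh() {
    local host="$1"
    local timeout="${2:-600}"            # overall deadline in seconds
    local deadline=$(( SECONDS + timeout ))
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( SECONDS >= deadline )); then
            echo "timed out waiting for $host" >&2
            return 1
        fi
        sleep 5
    done
}
```

The 11.71s recorded for "Wait until remote system is reachable" below is exactly this kind of loop returning as soon as all six nodes answer.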
2025-08-29 17:38:15.934434 | orchestrator |
2025-08-29 17:38:15.934590 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-08-29 17:38:15.934612 | orchestrator |
2025-08-29 17:38:15.934654 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-08-29 17:38:15.934667 | orchestrator | Friday 29 August 2025 17:38:03 +0000 (0:00:00.293) 0:00:00.293 *********
2025-08-29 17:38:15.934679 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:38:15.934691 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:38:15.934702 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:38:15.934713 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:38:15.934724 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:38:15.934735 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:38:15.934746 | orchestrator |
2025-08-29 17:38:15.934757 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:38:15.934769 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:38:15.934808 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:38:15.934822 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:38:15.934869 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:38:15.934882 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:38:15.934896 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:38:15.934908 | orchestrator |
2025-08-29 17:38:15.934920 | orchestrator |
2025-08-29 17:38:15.934934 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:38:15.934946 | orchestrator | Friday 29 August 2025 17:38:15 +0000 (0:00:11.713) 0:00:12.006 *********
2025-08-29 17:38:15.934959 | orchestrator | ===============================================================================
2025-08-29 17:38:15.934971 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.71s
2025-08-29 17:38:16.302090 | orchestrator | + osism apply hddtemp
2025-08-29 17:38:28.568280 | orchestrator | 2025-08-29 17:38:28 | INFO  | Task daf4ddc7-8d60-4982-bef8-ddd8f40c4fa2 (hddtemp) was prepared for execution.
2025-08-29 17:38:28.568366 | orchestrator | 2025-08-29 17:38:28 | INFO  | It takes a moment until task daf4ddc7-8d60-4982-bef8-ddd8f40c4fa2 (hddtemp) has been started and output is visible here.
2025-08-29 17:39:00.655020 | orchestrator |
2025-08-29 17:39:00.655131 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-08-29 17:39:00.655147 | orchestrator |
2025-08-29 17:39:00.655159 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-08-29 17:39:00.655171 | orchestrator | Friday 29 August 2025 17:38:33 +0000 (0:00:00.407) 0:00:00.407 *********
2025-08-29 17:39:00.655182 | orchestrator | ok: [testbed-manager]
2025-08-29 17:39:00.655194 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:39:00.655205 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:39:00.655216 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:39:00.655227 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:39:00.655237 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:39:00.655248 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:39:00.655259 | orchestrator |
2025-08-29 17:39:00.655270 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-08-29 17:39:00.655282 | orchestrator | Friday 29 August 2025 17:38:34 +0000 (0:00:01.005) 0:00:01.413 *********
2025-08-29 17:39:00.655312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:39:00.655327 | orchestrator |
2025-08-29 17:39:00.655338 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-08-29 17:39:00.655349 | orchestrator | Friday 29 August 2025 17:38:35 +0000 (0:00:01.365) 0:00:02.778 *********
2025-08-29 17:39:00.655360 | orchestrator | ok: [testbed-manager]
2025-08-29 17:39:00.655371 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:39:00.655381 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:39:00.655480 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:39:00.655500 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:39:00.655518 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:39:00.655535 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:39:00.655551 | orchestrator |
2025-08-29 17:39:00.655569 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-08-29 17:39:00.655588 | orchestrator | Friday 29 August 2025 17:38:37 +0000 (0:00:02.258) 0:00:05.037 *********
2025-08-29 17:39:00.655637 | orchestrator | changed: [testbed-manager]
2025-08-29 17:39:00.655659 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:39:00.655677 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:39:00.655695 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:39:00.655720 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:39:00.655740 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:39:00.655758 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:39:00.655775 | orchestrator |
2025-08-29 17:39:00.655792 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-08-29 17:39:00.655809 | orchestrator | Friday 29 August 2025 17:38:39 +0000 (0:00:01.326) 0:00:06.363 *********
2025-08-29 17:39:00.655827 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:39:00.655843 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:39:00.655860 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:39:00.655879 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:39:00.655897 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:39:00.655915 | orchestrator | ok: [testbed-manager]
2025-08-29 17:39:00.655933 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:39:00.655951 | orchestrator |
2025-08-29 17:39:00.655968 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-08-29 17:39:00.655986 | orchestrator | Friday 29 August 2025 17:38:40 +0000 (0:00:01.228) 0:00:07.592 *********
2025-08-29 17:39:00.656003 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:39:00.656022 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:39:00.656040 | orchestrator | changed: [testbed-manager]
2025-08-29 17:39:00.656059 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:39:00.656079 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:39:00.656099 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:39:00.656118 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:39:00.656136 | orchestrator |
2025-08-29 17:39:00.656154 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-08-29 17:39:00.656173 | orchestrator | Friday 29 August 2025 17:38:41 +0000 (0:00:00.975) 0:00:08.567 *********
2025-08-29 17:39:00.656192 | orchestrator | changed: [testbed-manager]
2025-08-29 17:39:00.656211 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:39:00.656229 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:39:00.656248 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:39:00.656265 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:39:00.656283 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:39:00.656302 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:39:00.656319 | orchestrator |
2025-08-29 17:39:00.656338 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-08-29 17:39:00.656357 | orchestrator | Friday 29 August 2025 17:38:55 +0000 (0:00:14.207) 0:00:22.774 *********
2025-08-29 17:39:00.656378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:39:00.656430 | orchestrator |
2025-08-29 17:39:00.656450 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-08-29 17:39:00.656470 | orchestrator | Friday 29 August 2025 17:38:57 +0000 (0:00:01.580) 0:00:24.355 *********
2025-08-29 17:39:00.656490 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:39:00.656509 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:39:00.656528 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:39:00.656547 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:39:00.656564 | orchestrator | changed: [testbed-manager]
2025-08-29 17:39:00.656581 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:39:00.656600 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:39:00.656618 | orchestrator |
2025-08-29 17:39:00.656636 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:39:00.656656 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:39:00.656725 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:39:00.656748 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:39:00.656768 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:39:00.656786 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:39:00.656806 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:39:00.656840 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 17:39:00.656861 | orchestrator |
2025-08-29 17:39:00.656881 | orchestrator |
2025-08-29 17:39:00.656900 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:39:00.656919 | orchestrator | Friday 29 August 2025 17:39:00 +0000 (0:00:02.860) 0:00:27.215 *********
2025-08-29 17:39:00.656940 | orchestrator | ===============================================================================
2025-08-29 17:39:00.656960 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.21s
2025-08-29 17:39:00.656980 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.86s
2025-08-29 17:39:00.656999 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.26s
2025-08-29 17:39:00.657016 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.58s
2025-08-29 17:39:00.657035 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.37s
2025-08-29 17:39:00.657053 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.33s
2025-08-29 17:39:00.657071 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.23s
2025-08-29 17:39:00.657089 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 1.01s
2025-08-29 17:39:00.657107 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.98s
2025-08-29 17:39:01.019253 | orchestrator | ++ semver 9.2.0 7.1.1
2025-08-29 17:39:01.088292 | orchestrator | + [[ 1 -ge 0 ]]
2025-08-29 17:39:01.088426 | orchestrator | + sudo systemctl restart manager.service
2025-08-29 17:39:15.275912 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-08-29 17:39:15.276008 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-08-29 17:39:15.276021 | orchestrator | + local max_attempts=60
2025-08-29 17:39:15.276031 | orchestrator | + local name=ceph-ansible
2025-08-29 17:39:15.276039 | orchestrator | + local attempt_num=1
2025-08-29 17:39:15.276047 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:39:15.311784 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:39:15.311869 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:39:15.311879 | orchestrator | + sleep 5
2025-08-29 17:39:20.317106 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:39:20.352688 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:39:20.352790 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:39:20.352806 | orchestrator | + sleep 5
2025-08-29 17:39:25.356367 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:39:25.430426 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:39:25.430497 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-08-29 17:39:25.430506 | orchestrator | + sleep 5
2025-08-29 17:39:30.435988 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-08-29 17:39:30.469840 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-08-29 17:39:30.470151 | orchestrator | +
(( attempt_num++ == max_attempts )) 2025-08-29 17:39:30.470168 | orchestrator | + sleep 5 2025-08-29 17:39:35.476223 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:39:35.519953 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:39:35.520023 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:39:35.520029 | orchestrator | + sleep 5 2025-08-29 17:39:40.524687 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:39:40.565188 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:39:40.565270 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:39:40.565281 | orchestrator | + sleep 5 2025-08-29 17:39:45.569992 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:39:45.614214 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:39:45.614280 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:39:45.614293 | orchestrator | + sleep 5 2025-08-29 17:39:50.618940 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:39:50.667537 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 17:39:50.667630 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:39:50.667646 | orchestrator | + sleep 5 2025-08-29 17:39:55.674468 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:39:55.734782 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 17:39:55.734876 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:39:55.734889 | orchestrator | + sleep 5 2025-08-29 17:40:00.738369 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:40:00.775002 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:00.775089 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-08-29 17:40:00.775104 | orchestrator | + sleep 5 2025-08-29 17:40:05.780045 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:40:05.823567 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:05.823656 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:40:05.823670 | orchestrator | + sleep 5 2025-08-29 17:40:10.826978 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:40:10.856475 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:10.856526 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:40:10.856538 | orchestrator | + sleep 5 2025-08-29 17:40:15.860744 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:40:15.903557 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:15.903638 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 17:40:15.903655 | orchestrator | + sleep 5 2025-08-29 17:40:20.908038 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 17:40:20.938916 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:20.938994 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 17:40:20.939009 | orchestrator | + local max_attempts=60 2025-08-29 17:40:20.939020 | orchestrator | + local name=kolla-ansible 2025-08-29 17:40:20.939030 | orchestrator | + local attempt_num=1 2025-08-29 17:40:20.939265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 17:40:20.972645 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:20.972692 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 17:40:20.972704 | orchestrator | + local max_attempts=60 2025-08-29 17:40:20.972716 | orchestrator | + local name=osism-ansible 2025-08-29 17:40:20.972727 | 
orchestrator | + local attempt_num=1 2025-08-29 17:40:20.973527 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 17:40:21.011815 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 17:40:21.011894 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 17:40:21.011911 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 17:40:21.191980 | orchestrator | ARA in ceph-ansible already disabled. 2025-08-29 17:40:21.368846 | orchestrator | ARA in kolla-ansible already disabled. 2025-08-29 17:40:21.527052 | orchestrator | ARA in osism-ansible already disabled. 2025-08-29 17:40:21.725792 | orchestrator | ARA in osism-kubernetes already disabled. 2025-08-29 17:40:21.726107 | orchestrator | + osism apply gather-facts 2025-08-29 17:40:34.008008 | orchestrator | 2025-08-29 17:40:34 | INFO  | Task 9d094c8f-6618-47bb-bb8e-bb61d2ea5e29 (gather-facts) was prepared for execution. 2025-08-29 17:40:34.008125 | orchestrator | 2025-08-29 17:40:34 | INFO  | It takes a moment until task 9d094c8f-6618-47bb-bb8e-bb61d2ea5e29 (gather-facts) has been started and output is visible here. 
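The repeated `docker inspect` / `sleep 5` pattern traced above can be reconstructed as a small bash helper. This is a sketch inferred from the `set -x` trace, not the actual script shipped in `/opt/configuration`; the `docker` invocation is left unqualified here (the trace uses `/usr/bin/docker`) so it is easy to stub out.

```shell
# Sketch of the wait_for_container_healthy helper, reconstructed from the
# trace above: poll the container's health status every 5 seconds until it
# reports "healthy", giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local status
    while true; do
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name")
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        if (( attempt_num++ == max_attempts )); then
            echo "container $name still $status after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage, as in the trace:
#   wait_for_container_healthy 60 ceph-ansible
```

Note how the trace shows the status moving through `unhealthy`, then `starting`, then `healthy` — the loop treats anything other than `healthy` the same way and simply keeps polling.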
2025-08-29 17:40:48.528006 | orchestrator | 2025-08-29 17:40:48.528078 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 17:40:48.528092 | orchestrator | 2025-08-29 17:40:48.528104 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 17:40:48.528115 | orchestrator | Friday 29 August 2025 17:40:38 +0000 (0:00:00.246) 0:00:00.246 ********* 2025-08-29 17:40:48.528126 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:40:48.528138 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:40:48.528148 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:40:48.528159 | orchestrator | ok: [testbed-manager] 2025-08-29 17:40:48.528171 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:40:48.528181 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:40:48.528192 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:40:48.528204 | orchestrator | 2025-08-29 17:40:48.528214 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 17:40:48.528225 | orchestrator | 2025-08-29 17:40:48.528236 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 17:40:48.528247 | orchestrator | Friday 29 August 2025 17:40:47 +0000 (0:00:08.811) 0:00:09.057 ********* 2025-08-29 17:40:48.528257 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:40:48.528269 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:40:48.528280 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:40:48.528291 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:40:48.528301 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:40:48.528312 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:40:48.528323 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:40:48.528333 | orchestrator | 2025-08-29 17:40:48.528344 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 17:40:48.528355 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528367 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528377 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528388 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528399 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528431 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528442 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:40:48.528453 | orchestrator | 2025-08-29 17:40:48.528464 | orchestrator | 2025-08-29 17:40:48.528475 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:40:48.528486 | orchestrator | Friday 29 August 2025 17:40:48 +0000 (0:00:00.577) 0:00:09.635 ********* 2025-08-29 17:40:48.528498 | orchestrator | =============================================================================== 2025-08-29 17:40:48.528509 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.81s 2025-08-29 17:40:48.528520 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-08-29 17:40:48.981914 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-08-29 17:40:48.999687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-08-29 17:40:49.015275 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-08-29 17:40:49.031422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-08-29 17:40:49.044144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-08-29 17:40:49.058146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-08-29 17:40:49.073416 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-08-29 17:40:49.089941 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-08-29 17:40:49.103669 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-08-29 17:40:49.121728 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-08-29 17:40:49.138731 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-08-29 17:40:49.153590 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-08-29 17:40:49.168026 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-08-29 17:40:49.184935 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-08-29 17:40:49.204533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-08-29 17:40:49.220306 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-08-29 17:40:49.235427 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-08-29 17:40:49.250522 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-08-29 17:40:49.267724 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-08-29 17:40:49.283454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-08-29 17:40:49.298747 | orchestrator | + [[ false == \t\r\u\e ]] 2025-08-29 17:40:49.482244 | orchestrator | ok: Runtime: 0:23:41.079679 2025-08-29 17:40:49.574763 | 2025-08-29 17:40:49.574954 | TASK [Deploy services] 2025-08-29 17:40:50.107719 | orchestrator | skipping: Conditional result was False 2025-08-29 17:40:50.125296 | 2025-08-29 17:40:50.125478 | TASK [Deploy in a nutshell] 2025-08-29 17:40:50.887166 | orchestrator | + set -e 2025-08-29 17:40:50.888847 | orchestrator | 2025-08-29 17:40:50.888925 | orchestrator | # PULL IMAGES 2025-08-29 17:40:50.888943 | orchestrator | 2025-08-29 17:40:50.888968 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 17:40:50.888989 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 17:40:50.889003 | orchestrator | ++ INTERACTIVE=false 2025-08-29 17:40:50.889046 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 17:40:50.889069 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 17:40:50.889083 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 17:40:50.889094 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 17:40:50.889111 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 17:40:50.889123 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 17:40:50.889139 | orchestrator | ++ 
CEPH_VERSION=reef 2025-08-29 17:40:50.889150 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 17:40:50.889167 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 17:40:50.889178 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 17:40:50.889192 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 17:40:50.889203 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 17:40:50.889214 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 17:40:50.889225 | orchestrator | ++ export ARA=false 2025-08-29 17:40:50.889236 | orchestrator | ++ ARA=false 2025-08-29 17:40:50.889246 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 17:40:50.889257 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 17:40:50.889267 | orchestrator | ++ export TEMPEST=false 2025-08-29 17:40:50.889278 | orchestrator | ++ TEMPEST=false 2025-08-29 17:40:50.889288 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 17:40:50.889298 | orchestrator | ++ IS_ZUUL=true 2025-08-29 17:40:50.889309 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2025-08-29 17:40:50.889320 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2025-08-29 17:40:50.889331 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 17:40:50.889341 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 17:40:50.889352 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 17:40:50.889363 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 17:40:50.889373 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 17:40:50.889384 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 17:40:50.889394 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 17:40:50.889476 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 17:40:50.889490 | orchestrator | + echo 2025-08-29 17:40:50.889501 | orchestrator | + echo '# PULL IMAGES' 2025-08-29 17:40:50.889512 | orchestrator | + echo 2025-08-29 17:40:50.889534 | orchestrator | ++ semver 9.2.0 7.0.0 2025-08-29 
17:40:50.954370 | orchestrator | + [[ 1 -ge 0 ]] 2025-08-29 17:40:50.954457 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-08-29 17:40:53.182679 | orchestrator | 2025-08-29 17:40:53 | INFO  | Trying to run play pull-images in environment custom 2025-08-29 17:41:03.316914 | orchestrator | 2025-08-29 17:41:03 | INFO  | Task 4a0f962d-d6e7-4bfb-844e-61bfe3f4ea90 (pull-images) was prepared for execution. 2025-08-29 17:41:03.317049 | orchestrator | 2025-08-29 17:41:03 | INFO  | Task 4a0f962d-d6e7-4bfb-844e-61bfe3f4ea90 is running in background. No more output. Check ARA for logs. 2025-08-29 17:41:05.872685 | orchestrator | 2025-08-29 17:41:05 | INFO  | Trying to run play wipe-partitions in environment custom 2025-08-29 17:41:16.081118 | orchestrator | 2025-08-29 17:41:16 | INFO  | Task 8a4f9de4-97be-49f9-b392-9354e2ab7c2b (wipe-partitions) was prepared for execution. 2025-08-29 17:41:16.081214 | orchestrator | 2025-08-29 17:41:16 | INFO  | It takes a moment until task 8a4f9de4-97be-49f9-b392-9354e2ab7c2b (wipe-partitions) has been started and output is visible here. 
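The trace calls a `semver` helper (`++ semver 9.2.0 7.0.0`) whose output is then tested with `+ [[ 1 -ge 0 ]]`. The helper's implementation is not shown in the log; only its comparator contract can be inferred: print `1` if the first version is newer, `0` if equal, `-1` if older. A hypothetical stand-in built on GNU `sort -V`:

```shell
# Hypothetical reimplementation of the semver helper seen in the trace.
# Contract (assumed): echo 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1   # $2 sorts first under version ordering, so $1 is newer
    else
        echo -1
    fi
}
```

With this contract, `[[ $(semver 9.2.0 7.0.0) -ge 0 ]]` reads as "manager version is at least 7.0.0", gating the version-dependent steps that follow.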
2025-08-29 17:41:29.936374 | orchestrator | 2025-08-29 17:41:29.936506 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-08-29 17:41:29.936521 | orchestrator | 2025-08-29 17:41:29.936533 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-08-29 17:41:29.936549 | orchestrator | Friday 29 August 2025 17:41:20 +0000 (0:00:00.151) 0:00:00.151 ********* 2025-08-29 17:41:29.936560 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:41:29.936572 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:41:29.936583 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:41:29.936594 | orchestrator | 2025-08-29 17:41:29.936606 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-08-29 17:41:29.936645 | orchestrator | Friday 29 August 2025 17:41:21 +0000 (0:00:00.610) 0:00:00.762 ********* 2025-08-29 17:41:29.936657 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:41:29.936667 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:41:29.936678 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:41:29.936693 | orchestrator | 2025-08-29 17:41:29.936705 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-08-29 17:41:29.936715 | orchestrator | Friday 29 August 2025 17:41:21 +0000 (0:00:00.252) 0:00:01.015 ********* 2025-08-29 17:41:29.936726 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:41:29.936738 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:41:29.936748 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:41:29.936759 | orchestrator | 2025-08-29 17:41:29.936769 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-08-29 17:41:29.936780 | orchestrator | Friday 29 August 2025 17:41:22 +0000 (0:00:00.794) 0:00:01.809 ********* 2025-08-29 17:41:29.936791 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 17:41:29.936802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:41:29.936812 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:41:29.936823 | orchestrator | 2025-08-29 17:41:29.936833 | orchestrator | TASK [Check device availability] *********************************************** 2025-08-29 17:41:29.936844 | orchestrator | Friday 29 August 2025 17:41:22 +0000 (0:00:00.277) 0:00:02.086 ********* 2025-08-29 17:41:29.936855 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 17:41:29.936869 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 17:41:29.936880 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 17:41:29.936891 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 17:41:29.936902 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 17:41:29.936914 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 17:41:29.936925 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 17:41:29.936938 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 17:41:29.936950 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 17:41:29.936962 | orchestrator | 2025-08-29 17:41:29.936974 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-08-29 17:41:29.936986 | orchestrator | Friday 29 August 2025 17:41:23 +0000 (0:00:01.238) 0:00:03.325 ********* 2025-08-29 17:41:29.937001 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 17:41:29.937022 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 17:41:29.937043 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 17:41:29.937062 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 17:41:29.937081 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 17:41:29.937101 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-08-29 17:41:29.937121 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 17:41:29.937143 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 17:41:29.937165 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 17:41:29.937185 | orchestrator | 2025-08-29 17:41:29.937204 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-08-29 17:41:29.937217 | orchestrator | Friday 29 August 2025 17:41:25 +0000 (0:00:01.391) 0:00:04.717 ********* 2025-08-29 17:41:29.937229 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 17:41:29.937241 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 17:41:29.937253 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 17:41:29.937264 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 17:41:29.937275 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 17:41:29.937285 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 17:41:29.937295 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 17:41:29.937306 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 17:41:29.937334 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 17:41:29.937345 | orchestrator | 2025-08-29 17:41:29.937356 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-08-29 17:41:29.937367 | orchestrator | Friday 29 August 2025 17:41:28 +0000 (0:00:03.170) 0:00:07.887 ********* 2025-08-29 17:41:29.937377 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:41:29.937388 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:41:29.937398 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:41:29.937429 | orchestrator | 2025-08-29 17:41:29.937440 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-08-29 17:41:29.937451 | orchestrator | Friday 29 August 2025 17:41:28 +0000 (0:00:00.645) 0:00:08.533 ********* 2025-08-29 17:41:29.937462 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:41:29.937472 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:41:29.937483 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:41:29.937493 | orchestrator | 2025-08-29 17:41:29.937504 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:41:29.937516 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:29.937530 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:29.937571 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:29.937594 | orchestrator | 2025-08-29 17:41:29.937605 | orchestrator | 2025-08-29 17:41:29.937616 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:41:29.937627 | orchestrator | Friday 29 August 2025 17:41:29 +0000 (0:00:00.627) 0:00:09.161 ********* 2025-08-29 17:41:29.937637 | orchestrator | =============================================================================== 2025-08-29 17:41:29.937647 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.17s 2025-08-29 17:41:29.937658 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.39s 2025-08-29 17:41:29.937669 | orchestrator | Check device availability ----------------------------------------------- 1.24s 2025-08-29 17:41:29.937679 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.79s 2025-08-29 17:41:29.937690 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.65s 2025-08-29 17:41:29.937700 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-08-29 17:41:29.937711 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s 2025-08-29 17:41:29.937721 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-08-29 17:41:29.937732 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-08-29 17:41:42.338165 | orchestrator | 2025-08-29 17:41:42 | INFO  | Task 018cd0b2-d9d2-49ab-8b2b-be6086fe87e6 (facts) was prepared for execution. 2025-08-29 17:41:42.338275 | orchestrator | 2025-08-29 17:41:42 | INFO  | It takes a moment until task 018cd0b2-d9d2-49ab-8b2b-be6086fe87e6 (facts) has been started and output is visible here. 2025-08-29 17:41:55.649090 | orchestrator | 2025-08-29 17:41:55.649198 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 17:41:55.649213 | orchestrator | 2025-08-29 17:41:55.649226 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 17:41:55.649237 | orchestrator | Friday 29 August 2025 17:41:46 +0000 (0:00:00.405) 0:00:00.405 ********* 2025-08-29 17:41:55.649248 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:41:55.649260 | orchestrator | ok: [testbed-manager] 2025-08-29 17:41:55.649271 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:41:55.649281 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:41:55.649318 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:41:55.649329 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:41:55.649339 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:41:55.649350 | orchestrator | 2025-08-29 17:41:55.649361 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 
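Per data disk (`/dev/sdb` through `/dev/sdd` on nodes 3–5), the wipe-partitions play above amounts to roughly the following. This is a hedged shell rendering of the task sequence (signature wipe, 32M zero overwrite, udev refresh), not the playbook itself:

```shell
# Approximate shell equivalent of the wipe-partitions tasks seen above,
# for a single block device; the Ansible play loops this over the disks.
wipe_device() {
    local dev=$1
    wipefs --all "$dev"                       # drop fs/RAID/partition signatures
    dd if=/dev/zero of="$dev" bs=1M count=32  # overwrite first 32M with zeros
}

# Afterwards the play refreshes the kernel's view of the devices:
#   udevadm control --reload-rules
#   udevadm trigger
```

Zeroing the first 32M clears partition tables and LVM/Ceph metadata that `wipefs` alone might miss, which is why the play reports `changed` for the `dd` step even where `wipefs` found nothing to remove.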
17:41:55.649371 | orchestrator | Friday 29 August 2025 17:41:48 +0000 (0:00:01.298) 0:00:01.704 ********* 2025-08-29 17:41:55.649382 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:41:55.649393 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:41:55.649404 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:41:55.649484 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:41:55.649498 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:41:55.649509 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:41:55.649520 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:41:55.649530 | orchestrator | 2025-08-29 17:41:55.649541 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 17:41:55.649551 | orchestrator | 2025-08-29 17:41:55.649580 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 17:41:55.649591 | orchestrator | Friday 29 August 2025 17:41:49 +0000 (0:00:01.404) 0:00:03.109 ********* 2025-08-29 17:41:55.649602 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:41:55.649612 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:41:55.649623 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:41:55.649635 | orchestrator | ok: [testbed-manager] 2025-08-29 17:41:55.649648 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:41:55.649660 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:41:55.649672 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:41:55.649684 | orchestrator | 2025-08-29 17:41:55.649696 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 17:41:55.649708 | orchestrator | 2025-08-29 17:41:55.649720 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 17:41:55.649732 | orchestrator | Friday 29 August 2025 17:41:54 +0000 (0:00:05.171) 0:00:08.280 ********* 2025-08-29 17:41:55.649744 | 
orchestrator | skipping: [testbed-manager] 2025-08-29 17:41:55.649756 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:41:55.649768 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:41:55.649779 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:41:55.649791 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:41:55.649803 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:41:55.649819 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:41:55.649839 | orchestrator | 2025-08-29 17:41:55.649859 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:41:55.649879 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.649903 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.649921 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.649938 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.649957 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.649976 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.649995 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:41:55.650013 | orchestrator | 2025-08-29 17:41:55.650102 | orchestrator | 2025-08-29 17:41:55.650124 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:41:55.650157 | orchestrator | Friday 29 August 2025 17:41:55 +0000 (0:00:00.559) 0:00:08.840 ********* 2025-08-29 17:41:55.650169 | orchestrator | 
===============================================================================
2025-08-29 17:41:55.650179 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.17s
2025-08-29 17:41:55.650190 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.40s
2025-08-29 17:41:55.650201 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s
2025-08-29 17:41:55.650212 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-08-29 17:41:58.078937 | orchestrator | 2025-08-29 17:41:58 | INFO  | Task 63833b94-4cd2-426a-8124-391393c86066 (ceph-configure-lvm-volumes) was prepared for execution.
2025-08-29 17:41:58.079049 | orchestrator | 2025-08-29 17:41:58 | INFO  | It takes a moment until task 63833b94-4cd2-426a-8124-391393c86066 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-08-29 17:42:10.956382 | orchestrator |
2025-08-29 17:42:10.956564 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 17:42:10.956581 | orchestrator |
2025-08-29 17:42:10.956594 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:42:10.956605 | orchestrator | Friday 29 August 2025 17:42:02 +0000 (0:00:00.385) 0:00:00.385 *********
2025-08-29 17:42:10.956618 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 17:42:10.956637 | orchestrator |
2025-08-29 17:42:10.956655 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:42:10.956673 | orchestrator | Friday 29 August 2025 17:42:03 +0000 (0:00:00.252) 0:00:00.638 *********
2025-08-29 17:42:10.956693 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:42:10.956714 | orchestrator |
2025-08-29 17:42:10.956733 | orchestrator | TASK [Add known links to the list of
available block devices] ******************
2025-08-29 17:42:10.956751 | orchestrator | Friday 29 August 2025 17:42:03 +0000 (0:00:00.307) 0:00:00.945 *********
2025-08-29 17:42:10.956766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-08-29 17:42:10.956778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-08-29 17:42:10.956800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-08-29 17:42:10.956812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-08-29 17:42:10.956823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-08-29 17:42:10.956834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-08-29 17:42:10.956844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-08-29 17:42:10.956855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-08-29 17:42:10.956865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-08-29 17:42:10.956876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-08-29 17:42:10.956886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-08-29 17:42:10.956897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-08-29 17:42:10.956909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-08-29 17:42:10.956922 | orchestrator |
2025-08-29 17:42:10.956934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.956946 | orchestrator | Friday 29 August 2025 17:42:03 +0000 (0:00:00.400) 0:00:01.345 *********
2025-08-29 17:42:10.956958 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.956971 | orchestrator |
2025-08-29 17:42:10.957004 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957016 | orchestrator | Friday 29 August 2025 17:42:04 +0000 (0:00:00.500) 0:00:01.846 *********
2025-08-29 17:42:10.957028 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957040 | orchestrator |
2025-08-29 17:42:10.957051 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957063 | orchestrator | Friday 29 August 2025 17:42:04 +0000 (0:00:00.186) 0:00:02.032 *********
2025-08-29 17:42:10.957075 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957086 | orchestrator |
2025-08-29 17:42:10.957099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957111 | orchestrator | Friday 29 August 2025 17:42:04 +0000 (0:00:00.208) 0:00:02.240 *********
2025-08-29 17:42:10.957123 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957135 | orchestrator |
2025-08-29 17:42:10.957151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957163 | orchestrator | Friday 29 August 2025 17:42:04 +0000 (0:00:00.206) 0:00:02.446 *********
2025-08-29 17:42:10.957175 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957186 | orchestrator |
2025-08-29 17:42:10.957198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957210 | orchestrator | Friday 29 August 2025 17:42:05 +0000 (0:00:00.198) 0:00:02.645 *********
2025-08-29 17:42:10.957223 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957235 | orchestrator |
2025-08-29 17:42:10.957247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957259 | orchestrator | Friday 29 August 2025 17:42:05 +0000 (0:00:00.201) 0:00:02.846 *********
2025-08-29 17:42:10.957271 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957281 | orchestrator |
2025-08-29 17:42:10.957292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957302 | orchestrator | Friday 29 August 2025 17:42:05 +0000 (0:00:00.218) 0:00:03.065 *********
2025-08-29 17:42:10.957313 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957323 | orchestrator |
2025-08-29 17:42:10.957334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957344 | orchestrator | Friday 29 August 2025 17:42:05 +0000 (0:00:00.201) 0:00:03.266 *********
2025-08-29 17:42:10.957355 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e)
2025-08-29 17:42:10.957367 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e)
2025-08-29 17:42:10.957378 | orchestrator |
2025-08-29 17:42:10.957388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957399 | orchestrator | Friday 29 August 2025 17:42:06 +0000 (0:00:00.489) 0:00:03.756 *********
2025-08-29 17:42:10.957456 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae)
2025-08-29 17:42:10.957471 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae)
2025-08-29 17:42:10.957481 | orchestrator |
2025-08-29 17:42:10.957492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957509 | orchestrator | Friday 29 August 2025 17:42:06 +0000 (0:00:00.431) 0:00:04.187 *********
2025-08-29 17:42:10.957520 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff)
2025-08-29 17:42:10.957531 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff)
2025-08-29 17:42:10.957542 | orchestrator |
2025-08-29 17:42:10.957552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957563 | orchestrator | Friday 29 August 2025 17:42:07 +0000 (0:00:00.658) 0:00:04.846 *********
2025-08-29 17:42:10.957573 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217)
2025-08-29 17:42:10.957630 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217)
2025-08-29 17:42:10.957643 | orchestrator |
2025-08-29 17:42:10.957653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:10.957664 | orchestrator | Friday 29 August 2025 17:42:07 +0000 (0:00:00.674) 0:00:05.521 *********
2025-08-29 17:42:10.957682 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 17:42:10.957701 | orchestrator |
2025-08-29 17:42:10.957720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.957739 | orchestrator | Friday 29 August 2025 17:42:08 +0000 (0:00:00.772) 0:00:06.293 *********
2025-08-29 17:42:10.957759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-08-29 17:42:10.957777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-08-29 17:42:10.957795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-08-29 17:42:10.957808 |
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-08-29 17:42:10.957819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-08-29 17:42:10.957829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-08-29 17:42:10.957840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-08-29 17:42:10.957850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-08-29 17:42:10.957861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-08-29 17:42:10.957871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-08-29 17:42:10.957882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-08-29 17:42:10.957892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-08-29 17:42:10.957902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-08-29 17:42:10.957913 | orchestrator |
2025-08-29 17:42:10.957923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.957934 | orchestrator | Friday 29 August 2025 17:42:09 +0000 (0:00:00.403) 0:00:06.697 *********
2025-08-29 17:42:10.957945 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957955 | orchestrator |
2025-08-29 17:42:10.957966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.957976 | orchestrator | Friday 29 August 2025 17:42:09 +0000 (0:00:00.202) 0:00:06.900 *********
2025-08-29 17:42:10.957987 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.957997 | orchestrator |
2025-08-29 17:42:10.958007 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.958082 | orchestrator | Friday 29 August 2025 17:42:09 +0000 (0:00:00.231) 0:00:07.132 *********
2025-08-29 17:42:10.958094 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.958105 | orchestrator |
2025-08-29 17:42:10.958115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.958126 | orchestrator | Friday 29 August 2025 17:42:09 +0000 (0:00:00.246) 0:00:07.378 *********
2025-08-29 17:42:10.958136 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.958147 | orchestrator |
2025-08-29 17:42:10.958157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.958168 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.223) 0:00:07.602 *********
2025-08-29 17:42:10.958178 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.958189 | orchestrator |
2025-08-29 17:42:10.958199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.958219 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.206) 0:00:07.808 *********
2025-08-29 17:42:10.958230 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.958240 | orchestrator |
2025-08-29 17:42:10.958251 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.958261 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.248) 0:00:08.056 *********
2025-08-29 17:42:10.958271 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:10.958282 | orchestrator |
2025-08-29 17:42:10.958292 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:10.958303 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.209) 0:00:08.266 *********
2025-08-29 17:42:10.958323 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.196842 | orchestrator |
2025-08-29 17:42:19.196941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:19.196958 | orchestrator | Friday 29 August 2025 17:42:10 +0000 (0:00:00.216) 0:00:08.483 *********
2025-08-29 17:42:19.196968 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-08-29 17:42:19.196979 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-08-29 17:42:19.196989 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-08-29 17:42:19.196999 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-08-29 17:42:19.197008 | orchestrator |
2025-08-29 17:42:19.197019 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:19.197047 | orchestrator | Friday 29 August 2025 17:42:12 +0000 (0:00:01.127) 0:00:09.611 *********
2025-08-29 17:42:19.197057 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197066 | orchestrator |
2025-08-29 17:42:19.197076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:19.197086 | orchestrator | Friday 29 August 2025 17:42:12 +0000 (0:00:00.198) 0:00:09.809 *********
2025-08-29 17:42:19.197095 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197104 | orchestrator |
2025-08-29 17:42:19.197114 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:19.197124 | orchestrator | Friday 29 August 2025 17:42:12 +0000 (0:00:00.237) 0:00:10.047 *********
2025-08-29 17:42:19.197133 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197142 | orchestrator |
2025-08-29 17:42:19.197152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29
17:42:19.197161 | orchestrator | Friday 29 August 2025 17:42:12 +0000 (0:00:00.199) 0:00:10.247 *********
2025-08-29 17:42:19.197171 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197180 | orchestrator |
2025-08-29 17:42:19.197190 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-08-29 17:42:19.197199 | orchestrator | Friday 29 August 2025 17:42:12 +0000 (0:00:00.218) 0:00:10.465 *********
2025-08-29 17:42:19.197209 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-08-29 17:42:19.197218 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-08-29 17:42:19.197228 | orchestrator |
2025-08-29 17:42:19.197237 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-08-29 17:42:19.197247 | orchestrator | Friday 29 August 2025 17:42:13 +0000 (0:00:00.190) 0:00:10.656 *********
2025-08-29 17:42:19.197256 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197266 | orchestrator |
2025-08-29 17:42:19.197275 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-08-29 17:42:19.197285 | orchestrator | Friday 29 August 2025 17:42:13 +0000 (0:00:00.140) 0:00:10.796 *********
2025-08-29 17:42:19.197294 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197303 | orchestrator |
2025-08-29 17:42:19.197313 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-08-29 17:42:19.197322 | orchestrator | Friday 29 August 2025 17:42:13 +0000 (0:00:00.151) 0:00:10.948 *********
2025-08-29 17:42:19.197332 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197341 | orchestrator |
2025-08-29 17:42:19.197372 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-08-29 17:42:19.197382 | orchestrator | Friday 29 August 2025 17:42:13 +0000 (0:00:00.146) 0:00:11.094 *********
2025-08-29 17:42:19.197393 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:42:19.197404 | orchestrator |
2025-08-29 17:42:19.197415 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-08-29 17:42:19.197450 | orchestrator | Friday 29 August 2025 17:42:13 +0000 (0:00:00.148) 0:00:11.242 *********
2025-08-29 17:42:19.197462 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76bb4758-fd8e-569b-82df-4997dbff6ccd'}})
2025-08-29 17:42:19.197472 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab048149-1b6d-515a-8df0-d9a146565eca'}})
2025-08-29 17:42:19.197483 | orchestrator |
2025-08-29 17:42:19.197494 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-08-29 17:42:19.197504 | orchestrator | Friday 29 August 2025 17:42:13 +0000 (0:00:00.200) 0:00:11.443 *********
2025-08-29 17:42:19.197516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76bb4758-fd8e-569b-82df-4997dbff6ccd'}})
2025-08-29 17:42:19.197534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab048149-1b6d-515a-8df0-d9a146565eca'}})
2025-08-29 17:42:19.197546 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197556 | orchestrator |
2025-08-29 17:42:19.197567 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-08-29 17:42:19.197577 | orchestrator | Friday 29 August 2025 17:42:14 +0000 (0:00:00.180) 0:00:11.623 *********
2025-08-29 17:42:19.197588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76bb4758-fd8e-569b-82df-4997dbff6ccd'}})
2025-08-29 17:42:19.197598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab048149-1b6d-515a-8df0-d9a146565eca'}})
2025-08-29 17:42:19.197609 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197619 | orchestrator |
2025-08-29 17:42:19.197630 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-08-29 17:42:19.197641 | orchestrator | Friday 29 August 2025 17:42:14 +0000 (0:00:00.171) 0:00:11.795 *********
2025-08-29 17:42:19.197651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76bb4758-fd8e-569b-82df-4997dbff6ccd'}})
2025-08-29 17:42:19.197661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab048149-1b6d-515a-8df0-d9a146565eca'}})
2025-08-29 17:42:19.197672 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197683 | orchestrator |
2025-08-29 17:42:19.197708 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-08-29 17:42:19.197720 | orchestrator | Friday 29 August 2025 17:42:14 +0000 (0:00:00.380) 0:00:12.175 *********
2025-08-29 17:42:19.197731 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:42:19.197742 | orchestrator |
2025-08-29 17:42:19.197752 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-08-29 17:42:19.197762 | orchestrator | Friday 29 August 2025 17:42:14 +0000 (0:00:00.172) 0:00:12.348 *********
2025-08-29 17:42:19.197771 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:42:19.197781 | orchestrator |
2025-08-29 17:42:19.197790 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-08-29 17:42:19.197800 | orchestrator | Friday 29 August 2025 17:42:14 +0000 (0:00:00.145) 0:00:12.493 *********
2025-08-29 17:42:19.197809 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197818 | orchestrator |
2025-08-29 17:42:19.197828 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-08-29
17:42:19.197837 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.149) 0:00:12.643 *********
2025-08-29 17:42:19.197846 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197856 | orchestrator |
2025-08-29 17:42:19.197865 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 17:42:19.197882 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.137) 0:00:12.781 *********
2025-08-29 17:42:19.197892 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.197901 | orchestrator |
2025-08-29 17:42:19.197911 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 17:42:19.197921 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.160) 0:00:12.920 *********
2025-08-29 17:42:19.197930 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:42:19.197940 | orchestrator |     "ceph_osd_devices": {
2025-08-29 17:42:19.197949 | orchestrator |         "sdb": {
2025-08-29 17:42:19.197959 | orchestrator |             "osd_lvm_uuid": "76bb4758-fd8e-569b-82df-4997dbff6ccd"
2025-08-29 17:42:19.197969 | orchestrator |         },
2025-08-29 17:42:19.197978 | orchestrator |         "sdc": {
2025-08-29 17:42:19.197988 | orchestrator |             "osd_lvm_uuid": "ab048149-1b6d-515a-8df0-d9a146565eca"
2025-08-29 17:42:19.197997 | orchestrator |         }
2025-08-29 17:42:19.198007 | orchestrator |     }
2025-08-29 17:42:19.198076 | orchestrator | }
2025-08-29 17:42:19.198088 | orchestrator |
2025-08-29 17:42:19.198097 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 17:42:19.198107 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.160) 0:00:13.081 *********
2025-08-29 17:42:19.198116 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.198126 | orchestrator |
2025-08-29 17:42:19.198135 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 17:42:19.198145 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.147) 0:00:13.228 *********
2025-08-29 17:42:19.198160 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.198170 | orchestrator |
2025-08-29 17:42:19.198179 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 17:42:19.198189 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.170) 0:00:13.399 *********
2025-08-29 17:42:19.198198 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:42:19.198208 | orchestrator |
2025-08-29 17:42:19.198217 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 17:42:19.198227 | orchestrator | Friday 29 August 2025 17:42:15 +0000 (0:00:00.134) 0:00:13.533 *********
2025-08-29 17:42:19.198236 | orchestrator | changed: [testbed-node-3] => {
2025-08-29 17:42:19.198246 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-08-29 17:42:19.198256 | orchestrator |         "ceph_osd_devices": {
2025-08-29 17:42:19.198265 | orchestrator |             "sdb": {
2025-08-29 17:42:19.198275 | orchestrator |                 "osd_lvm_uuid": "76bb4758-fd8e-569b-82df-4997dbff6ccd"
2025-08-29 17:42:19.198285 | orchestrator |             },
2025-08-29 17:42:19.198294 | orchestrator |             "sdc": {
2025-08-29 17:42:19.198304 | orchestrator |                 "osd_lvm_uuid": "ab048149-1b6d-515a-8df0-d9a146565eca"
2025-08-29 17:42:19.198313 | orchestrator |             }
2025-08-29 17:42:19.198323 | orchestrator |         },
2025-08-29 17:42:19.198332 | orchestrator |         "lvm_volumes": [
2025-08-29 17:42:19.198342 | orchestrator |             {
2025-08-29 17:42:19.198351 | orchestrator |                 "data": "osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd",
2025-08-29 17:42:19.198361 | orchestrator |                 "data_vg": "ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd"
2025-08-29 17:42:19.198370 | orchestrator |             },
2025-08-29 17:42:19.198380 | orchestrator |             {
2025-08-29 17:42:19.198389 | orchestrator |                 "data": "osd-block-ab048149-1b6d-515a-8df0-d9a146565eca",
2025-08-29 17:42:19.198399 | orchestrator |                 "data_vg": "ceph-ab048149-1b6d-515a-8df0-d9a146565eca"
2025-08-29 17:42:19.198408 | orchestrator |             }
2025-08-29 17:42:19.198417 | orchestrator |         ]
2025-08-29 17:42:19.198443 | orchestrator |     }
2025-08-29 17:42:19.198452 | orchestrator | }
2025-08-29 17:42:19.198462 | orchestrator |
2025-08-29 17:42:19.198471 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 17:42:19.198481 | orchestrator | Friday 29 August 2025 17:42:16 +0000 (0:00:00.223) 0:00:13.756 *********
2025-08-29 17:42:19.198497 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 17:42:19.198507 | orchestrator |
2025-08-29 17:42:19.198516 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 17:42:19.198526 | orchestrator |
2025-08-29 17:42:19.198535 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:42:19.198545 | orchestrator | Friday 29 August 2025 17:42:18 +0000 (0:00:02.444) 0:00:16.201 *********
2025-08-29 17:42:19.198554 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 17:42:19.198564 | orchestrator |
2025-08-29 17:42:19.198573 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:42:19.198583 | orchestrator | Friday 29 August 2025 17:42:18 +0000 (0:00:00.264) 0:00:16.466 *********
2025-08-29 17:42:19.198592 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:42:19.198602 | orchestrator |
2025-08-29 17:42:19.198612 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:19.198628 | orchestrator | Friday 29 August 2025 17:42:19 +0000 (0:00:00.255) 0:00:16.721 *********
2025-08-29 17:42:28.095746 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-08-29 17:42:28.095851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-08-29 17:42:28.095865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-08-29 17:42:28.095876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-08-29 17:42:28.095888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-08-29 17:42:28.095899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-08-29 17:42:28.095910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-08-29 17:42:28.095920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-08-29 17:42:28.095931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-08-29 17:42:28.095942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-08-29 17:42:28.095972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-08-29 17:42:28.095984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-08-29 17:42:28.095995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-08-29 17:42:28.096006 | orchestrator |
2025-08-29 17:42:28.096023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096035 | orchestrator | Friday 29 August 2025 17:42:19 +0000 (0:00:00.396) 0:00:17.118 *********
2025-08-29 17:42:28.096046 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096059 | orchestrator |
2025-08-29 17:42:28.096070 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096081 | orchestrator | Friday 29 August 2025 17:42:19 +0000 (0:00:00.206) 0:00:17.324 *********
2025-08-29 17:42:28.096091 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096102 | orchestrator |
2025-08-29 17:42:28.096119 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096139 | orchestrator | Friday 29 August 2025 17:42:20 +0000 (0:00:00.268) 0:00:17.593 *********
2025-08-29 17:42:28.096158 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096177 | orchestrator |
2025-08-29 17:42:28.096197 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096215 | orchestrator | Friday 29 August 2025 17:42:20 +0000 (0:00:00.223) 0:00:17.816 *********
2025-08-29 17:42:28.096234 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096251 | orchestrator |
2025-08-29 17:42:28.096323 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096346 | orchestrator | Friday 29 August 2025 17:42:20 +0000 (0:00:00.216) 0:00:18.032 *********
2025-08-29 17:42:28.096364 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096375 | orchestrator |
2025-08-29 17:42:28.096386 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096397 | orchestrator | Friday 29 August 2025 17:42:20 +0000 (0:00:00.201) 0:00:18.234 *********
2025-08-29 17:42:28.096407 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096418 | orchestrator |
2025-08-29 17:42:28.096451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096462 | orchestrator | Friday 29 August 2025 17:42:21 +0000 (0:00:00.684) 0:00:18.919 *********
2025-08-29 17:42:28.096473 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096484 | orchestrator |
2025-08-29 17:42:28.096494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096505 | orchestrator | Friday 29 August 2025 17:42:21 +0000 (0:00:00.226) 0:00:19.146 *********
2025-08-29 17:42:28.096515 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.096526 | orchestrator |
2025-08-29 17:42:28.096536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096547 | orchestrator | Friday 29 August 2025 17:42:21 +0000 (0:00:00.202) 0:00:19.348 *********
2025-08-29 17:42:28.096558 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa)
2025-08-29 17:42:28.096570 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa)
2025-08-29 17:42:28.096581 | orchestrator |
2025-08-29 17:42:28.096591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096602 | orchestrator | Friday 29 August 2025 17:42:22 +0000 (0:00:00.424) 0:00:19.772 *********
2025-08-29 17:42:28.096613 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f)
2025-08-29 17:42:28.096624 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f)
2025-08-29 17:42:28.096634 | orchestrator |
2025-08-29 17:42:28.096645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096655 | orchestrator | Friday 29 August 2025 17:42:22 +0000 (0:00:00.456) 0:00:20.228 *********
2025-08-29 17:42:28.096666 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87)
2025-08-29 17:42:28.096677 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87)
2025-08-29 17:42:28.096688 | orchestrator |
2025-08-29 17:42:28.096698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096709 | orchestrator | Friday 29 August 2025 17:42:23 +0000 (0:00:00.603) 0:00:20.832 *********
2025-08-29 17:42:28.096739 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e)
2025-08-29 17:42:28.096751 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e)
2025-08-29 17:42:28.096762 | orchestrator |
2025-08-29 17:42:28.096773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:42:28.096784 | orchestrator | Friday 29 August 2025 17:42:23 +0000 (0:00:00.494) 0:00:21.327 *********
2025-08-29 17:42:28.096795 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 17:42:28.096806 | orchestrator |
2025-08-29 17:42:28.096816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:28.096835 | orchestrator | Friday 29 August 2025 17:42:24 +0000 (0:00:00.394) 0:00:21.721 *********
2025-08-29 17:42:28.096847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 17:42:28.096857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 17:42:28.096877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 17:42:28.096888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 17:42:28.096899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 17:42:28.096910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 17:42:28.096920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 17:42:28.096931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 17:42:28.096941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 17:42:28.096952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 17:42:28.096962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 17:42:28.096973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 17:42:28.096983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 17:42:28.096994 | orchestrator |
2025-08-29 17:42:28.097005 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:28.097016 | orchestrator | Friday 29 August 2025 17:42:24 +0000 (0:00:00.536) 0:00:22.258 *********
2025-08-29 17:42:28.097026 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.097037 | orchestrator |
2025-08-29 17:42:28.097048 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:28.097058 | orchestrator | Friday 29 August 2025 17:42:24 +0000 (0:00:00.222) 0:00:22.481 *********
2025-08-29 17:42:28.097069 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:42:28.097079 | orchestrator |
2025-08-29 17:42:28.097090 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:42:28.097101 | orchestrator |
Friday 29 August 2025 17:42:25 +0000 (0:00:00.808) 0:00:23.289 ********* 2025-08-29 17:42:28.097111 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097122 | orchestrator | 2025-08-29 17:42:28.097133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097143 | orchestrator | Friday 29 August 2025 17:42:25 +0000 (0:00:00.225) 0:00:23.515 ********* 2025-08-29 17:42:28.097154 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097165 | orchestrator | 2025-08-29 17:42:28.097176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097186 | orchestrator | Friday 29 August 2025 17:42:26 +0000 (0:00:00.217) 0:00:23.732 ********* 2025-08-29 17:42:28.097197 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097208 | orchestrator | 2025-08-29 17:42:28.097219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097230 | orchestrator | Friday 29 August 2025 17:42:26 +0000 (0:00:00.232) 0:00:23.964 ********* 2025-08-29 17:42:28.097240 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097251 | orchestrator | 2025-08-29 17:42:28.097262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097273 | orchestrator | Friday 29 August 2025 17:42:26 +0000 (0:00:00.292) 0:00:24.256 ********* 2025-08-29 17:42:28.097283 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097294 | orchestrator | 2025-08-29 17:42:28.097305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097315 | orchestrator | Friday 29 August 2025 17:42:26 +0000 (0:00:00.216) 0:00:24.473 ********* 2025-08-29 17:42:28.097326 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097337 | orchestrator | 2025-08-29 17:42:28.097347 
| orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097365 | orchestrator | Friday 29 August 2025 17:42:27 +0000 (0:00:00.233) 0:00:24.707 ********* 2025-08-29 17:42:28.097376 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 17:42:28.097388 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 17:42:28.097399 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 17:42:28.097410 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 17:42:28.097465 | orchestrator | 2025-08-29 17:42:28.097478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:28.097489 | orchestrator | Friday 29 August 2025 17:42:27 +0000 (0:00:00.700) 0:00:25.407 ********* 2025-08-29 17:42:28.097500 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:28.097510 | orchestrator | 2025-08-29 17:42:28.097528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:35.483201 | orchestrator | Friday 29 August 2025 17:42:28 +0000 (0:00:00.211) 0:00:25.618 ********* 2025-08-29 17:42:35.483352 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483370 | orchestrator | 2025-08-29 17:42:35.483383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:35.483394 | orchestrator | Friday 29 August 2025 17:42:28 +0000 (0:00:00.183) 0:00:25.802 ********* 2025-08-29 17:42:35.483405 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483416 | orchestrator | 2025-08-29 17:42:35.483502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:35.483514 | orchestrator | Friday 29 August 2025 17:42:28 +0000 (0:00:00.202) 0:00:26.004 ********* 2025-08-29 17:42:35.483525 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483536 | 
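The device-discovery tasks above iterate over `/dev/disk/by-id` style links (`scsi-0QEMU_...`, `scsi-SQEMU_...`, `ata-QEMU_DVD-ROM_QM00001`) and partitions (`sda1`, `sda14`, ...) to build the list of available block devices. A minimal sketch of the grouping idea, using link names from this log; the helper and the link-to-device mapping are illustrative assumptions, not the playbook's actual code:

```python
# Illustrative sketch: group /dev/disk/by-id style links by the device
# they resolve to, so each block device can be addressed via a stable
# identifier. Link names are taken from the log; the sdb/sr0 mapping
# here is assumed for the example.

def group_links_by_device(links: dict[str, str]) -> dict[str, list[str]]:
    """Map device name -> sorted list of by-id links resolving to it."""
    grouped: dict[str, list[str]] = {}
    for link, target in links.items():
        grouped.setdefault(target, []).append(link)
    return {dev: sorted(names) for dev, names in grouped.items()}

links = {
    "scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(group_links_by_device(links))
```

In the real tasks the resolution comes from udev symlinks on the node; the point is only that several stable by-id names can refer to one kernel device name.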
orchestrator | 2025-08-29 17:42:35.483567 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 17:42:35.483579 | orchestrator | Friday 29 August 2025 17:42:28 +0000 (0:00:00.209) 0:00:26.214 ********* 2025-08-29 17:42:35.483590 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-08-29 17:42:35.483601 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-08-29 17:42:35.483612 | orchestrator | 2025-08-29 17:42:35.483623 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 17:42:35.483633 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:00.404) 0:00:26.618 ********* 2025-08-29 17:42:35.483644 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483655 | orchestrator | 2025-08-29 17:42:35.483666 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 17:42:35.483677 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:00.142) 0:00:26.761 ********* 2025-08-29 17:42:35.483689 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483700 | orchestrator | 2025-08-29 17:42:35.483712 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 17:42:35.483725 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:00.160) 0:00:26.921 ********* 2025-08-29 17:42:35.483737 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483748 | orchestrator | 2025-08-29 17:42:35.483760 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 17:42:35.483772 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:00.179) 0:00:27.100 ********* 2025-08-29 17:42:35.483785 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:42:35.483798 | orchestrator | 2025-08-29 17:42:35.483810 | orchestrator | TASK [Generate 
lvm_volumes structure (block only)] ***************************** 2025-08-29 17:42:35.483822 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:00.179) 0:00:27.280 ********* 2025-08-29 17:42:35.483835 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'}}) 2025-08-29 17:42:35.483848 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '90167df7-514b-5586-921e-4d7a2964fdd2'}}) 2025-08-29 17:42:35.483860 | orchestrator | 2025-08-29 17:42:35.483871 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 17:42:35.483909 | orchestrator | Friday 29 August 2025 17:42:29 +0000 (0:00:00.192) 0:00:27.473 ********* 2025-08-29 17:42:35.483922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'}})  2025-08-29 17:42:35.483936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '90167df7-514b-5586-921e-4d7a2964fdd2'}})  2025-08-29 17:42:35.483948 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.483960 | orchestrator | 2025-08-29 17:42:35.483972 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 17:42:35.483984 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.152) 0:00:27.626 ********* 2025-08-29 17:42:35.483997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'}})  2025-08-29 17:42:35.484009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '90167df7-514b-5586-921e-4d7a2964fdd2'}})  2025-08-29 17:42:35.484021 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484033 | orchestrator | 2025-08-29 17:42:35.484045 | orchestrator | TASK [Generate lvm_volumes structure (block 
+ db + wal)] *********************** 2025-08-29 17:42:35.484057 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.221) 0:00:27.847 ********* 2025-08-29 17:42:35.484068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'}})  2025-08-29 17:42:35.484079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '90167df7-514b-5586-921e-4d7a2964fdd2'}})  2025-08-29 17:42:35.484090 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484100 | orchestrator | 2025-08-29 17:42:35.484111 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 17:42:35.484122 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.186) 0:00:28.034 ********* 2025-08-29 17:42:35.484133 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:42:35.484143 | orchestrator | 2025-08-29 17:42:35.484154 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 17:42:35.484165 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.153) 0:00:28.187 ********* 2025-08-29 17:42:35.484175 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:42:35.484186 | orchestrator | 2025-08-29 17:42:35.484197 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 17:42:35.484207 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.141) 0:00:28.329 ********* 2025-08-29 17:42:35.484226 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484244 | orchestrator | 2025-08-29 17:42:35.484285 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 17:42:35.484306 | orchestrator | Friday 29 August 2025 17:42:30 +0000 (0:00:00.132) 0:00:28.461 ********* 2025-08-29 17:42:35.484324 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484342 | 
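The `osd_lvm_uuid` values set above (e.g. `7e0f67bb-93ba-55c2-...`, `90167df7-514b-5586-...`) all carry a version-5 nibble, i.e. they are name-based UUIDs and thus reproducible across runs. A sketch of how such stable IDs could be derived; the namespace and name format below are assumptions, not the playbook's actual inputs:

```python
import uuid

# Assumed derivation: a name-based (version 5) UUID over hostname+device.
# NAMESPACE and the "hostname-device" name format are illustrative only.
NAMESPACE = uuid.NAMESPACE_DNS

def osd_lvm_uuid(hostname: str, device: str) -> uuid.UUID:
    return uuid.uuid5(NAMESPACE, f"{hostname}-{device}")

u = osd_lvm_uuid("testbed-node-4", "sdb")
print(u, u.version)  # same input always yields the same UUID; version is 5
```

Determinism matters here because the UUID names the VG/LV for an OSD: re-running the play must map each disk to the same volume names.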
orchestrator | 2025-08-29 17:42:35.484353 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 17:42:35.484364 | orchestrator | Friday 29 August 2025 17:42:31 +0000 (0:00:00.356) 0:00:28.818 ********* 2025-08-29 17:42:35.484374 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484384 | orchestrator | 2025-08-29 17:42:35.484395 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 17:42:35.484405 | orchestrator | Friday 29 August 2025 17:42:31 +0000 (0:00:00.155) 0:00:28.974 ********* 2025-08-29 17:42:35.484416 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:42:35.484456 | orchestrator |  "ceph_osd_devices": { 2025-08-29 17:42:35.484468 | orchestrator |  "sdb": { 2025-08-29 17:42:35.484479 | orchestrator |  "osd_lvm_uuid": "7e0f67bb-93ba-55c2-b7d3-c3a17e91e129" 2025-08-29 17:42:35.484490 | orchestrator |  }, 2025-08-29 17:42:35.484500 | orchestrator |  "sdc": { 2025-08-29 17:42:35.484511 | orchestrator |  "osd_lvm_uuid": "90167df7-514b-5586-921e-4d7a2964fdd2" 2025-08-29 17:42:35.484531 | orchestrator |  } 2025-08-29 17:42:35.484542 | orchestrator |  } 2025-08-29 17:42:35.484553 | orchestrator | } 2025-08-29 17:42:35.484564 | orchestrator | 2025-08-29 17:42:35.484575 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 17:42:35.484585 | orchestrator | Friday 29 August 2025 17:42:31 +0000 (0:00:00.167) 0:00:29.142 ********* 2025-08-29 17:42:35.484596 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484606 | orchestrator | 2025-08-29 17:42:35.484624 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 17:42:35.484635 | orchestrator | Friday 29 August 2025 17:42:31 +0000 (0:00:00.204) 0:00:29.346 ********* 2025-08-29 17:42:35.484646 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484656 | 
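The `ceph_osd_devices` structure printed above is expanded into ceph-ansible style `lvm_volumes` entries, with `data` and `data_vg` derived from each device's `osd_lvm_uuid` (the `osd-block-<uuid>` / `ceph-<uuid>` naming appears in the configuration data this play writes out). A minimal sketch of that mapping, with values copied from the log:

```python
# Sketch of the "Generate lvm_volumes structure (block only)" step:
# each OSD device's osd_lvm_uuid becomes an LV name (data) and a VG
# name (data_vg). Values below are the testbed-node-4 ones from the log.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "7e0f67bb-93ba-55c2-b7d3-c3a17e91e129"},
    "sdc": {"osd_lvm_uuid": "90167df7-514b-5586-921e-4d7a2964fdd2"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{spec['osd_lvm_uuid']}",
        "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
    }
    for spec in ceph_osd_devices.values()
]
print(lvm_volumes)
```

The block+db and block+wal variants are skipped in this run, so only the block-only list is compiled and handed to the "Write configuration file" handler.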
orchestrator | 2025-08-29 17:42:35.484667 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 17:42:35.484677 | orchestrator | Friday 29 August 2025 17:42:31 +0000 (0:00:00.159) 0:00:29.505 ********* 2025-08-29 17:42:35.484688 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:42:35.484698 | orchestrator | 2025-08-29 17:42:35.484709 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 17:42:35.484719 | orchestrator | Friday 29 August 2025 17:42:32 +0000 (0:00:00.157) 0:00:29.663 ********* 2025-08-29 17:42:35.484730 | orchestrator | changed: [testbed-node-4] => { 2025-08-29 17:42:35.484741 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 17:42:35.484751 | orchestrator |  "ceph_osd_devices": { 2025-08-29 17:42:35.484762 | orchestrator |  "sdb": { 2025-08-29 17:42:35.484773 | orchestrator |  "osd_lvm_uuid": "7e0f67bb-93ba-55c2-b7d3-c3a17e91e129" 2025-08-29 17:42:35.484783 | orchestrator |  }, 2025-08-29 17:42:35.484799 | orchestrator |  "sdc": { 2025-08-29 17:42:35.484810 | orchestrator |  "osd_lvm_uuid": "90167df7-514b-5586-921e-4d7a2964fdd2" 2025-08-29 17:42:35.484821 | orchestrator |  } 2025-08-29 17:42:35.484831 | orchestrator |  }, 2025-08-29 17:42:35.484842 | orchestrator |  "lvm_volumes": [ 2025-08-29 17:42:35.484853 | orchestrator |  { 2025-08-29 17:42:35.484863 | orchestrator |  "data": "osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129", 2025-08-29 17:42:35.484874 | orchestrator |  "data_vg": "ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129" 2025-08-29 17:42:35.484884 | orchestrator |  }, 2025-08-29 17:42:35.484895 | orchestrator |  { 2025-08-29 17:42:35.484905 | orchestrator |  "data": "osd-block-90167df7-514b-5586-921e-4d7a2964fdd2", 2025-08-29 17:42:35.484916 | orchestrator |  "data_vg": "ceph-90167df7-514b-5586-921e-4d7a2964fdd2" 2025-08-29 17:42:35.484927 | orchestrator |  } 2025-08-29 17:42:35.484937 | orchestrator |  ] 
2025-08-29 17:42:35.484948 | orchestrator |  } 2025-08-29 17:42:35.484958 | orchestrator | } 2025-08-29 17:42:35.484969 | orchestrator | 2025-08-29 17:42:35.484979 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-08-29 17:42:35.484990 | orchestrator | Friday 29 August 2025 17:42:32 +0000 (0:00:00.236) 0:00:29.900 ********* 2025-08-29 17:42:35.485000 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 17:42:35.485011 | orchestrator | 2025-08-29 17:42:35.485021 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 17:42:35.485032 | orchestrator | 2025-08-29 17:42:35.485042 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 17:42:35.485053 | orchestrator | Friday 29 August 2025 17:42:33 +0000 (0:00:01.351) 0:00:31.251 ********* 2025-08-29 17:42:35.485063 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 17:42:35.485074 | orchestrator | 2025-08-29 17:42:35.485085 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 17:42:35.485095 | orchestrator | Friday 29 August 2025 17:42:34 +0000 (0:00:00.511) 0:00:31.763 ********* 2025-08-29 17:42:35.485105 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:42:35.485122 | orchestrator | 2025-08-29 17:42:35.485133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:35.485144 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.807) 0:00:32.570 ********* 2025-08-29 17:42:35.485155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 17:42:35.485165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 17:42:35.485176 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 17:42:35.485186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 17:42:35.485197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 17:42:35.485207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 17:42:35.485225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 17:42:45.100146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 17:42:45.100255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 17:42:45.100282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 17:42:45.100302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 17:42:45.100322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 17:42:45.100342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 17:42:45.100363 | orchestrator | 2025-08-29 17:42:45.100386 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100406 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.436) 0:00:33.007 ********* 2025-08-29 17:42:45.100472 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100496 | orchestrator | 2025-08-29 17:42:45.100515 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100533 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.269) 0:00:33.276 ********* 2025-08-29 17:42:45.100552 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100571 | orchestrator | 2025-08-29 17:42:45.100590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100609 | orchestrator | Friday 29 August 2025 17:42:35 +0000 (0:00:00.227) 0:00:33.504 ********* 2025-08-29 17:42:45.100627 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100647 | orchestrator | 2025-08-29 17:42:45.100666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100685 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.218) 0:00:33.723 ********* 2025-08-29 17:42:45.100704 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100723 | orchestrator | 2025-08-29 17:42:45.100741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100760 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.260) 0:00:33.984 ********* 2025-08-29 17:42:45.100779 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100799 | orchestrator | 2025-08-29 17:42:45.100817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100836 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.233) 0:00:34.218 ********* 2025-08-29 17:42:45.100854 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100871 | orchestrator | 2025-08-29 17:42:45.100890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.100911 | orchestrator | Friday 29 August 2025 17:42:36 +0000 (0:00:00.219) 0:00:34.437 ********* 2025-08-29 17:42:45.100929 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.100947 | orchestrator | 2025-08-29 17:42:45.100994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-08-29 17:42:45.101015 | orchestrator | Friday 29 August 2025 17:42:37 +0000 (0:00:00.238) 0:00:34.675 ********* 2025-08-29 17:42:45.101034 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.101052 | orchestrator | 2025-08-29 17:42:45.101088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.101108 | orchestrator | Friday 29 August 2025 17:42:37 +0000 (0:00:00.221) 0:00:34.897 ********* 2025-08-29 17:42:45.101128 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e) 2025-08-29 17:42:45.101147 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e) 2025-08-29 17:42:45.101166 | orchestrator | 2025-08-29 17:42:45.101185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.101203 | orchestrator | Friday 29 August 2025 17:42:38 +0000 (0:00:00.688) 0:00:35.586 ********* 2025-08-29 17:42:45.101222 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c) 2025-08-29 17:42:45.101240 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c) 2025-08-29 17:42:45.101258 | orchestrator | 2025-08-29 17:42:45.101277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.101295 | orchestrator | Friday 29 August 2025 17:42:38 +0000 (0:00:00.918) 0:00:36.504 ********* 2025-08-29 17:42:45.101314 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527) 2025-08-29 17:42:45.101333 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527) 2025-08-29 17:42:45.101351 | orchestrator | 2025-08-29 17:42:45.101369 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-08-29 17:42:45.101387 | orchestrator | Friday 29 August 2025 17:42:39 +0000 (0:00:00.577) 0:00:37.081 ********* 2025-08-29 17:42:45.101406 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0) 2025-08-29 17:42:45.101456 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0) 2025-08-29 17:42:45.101477 | orchestrator | 2025-08-29 17:42:45.101494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:42:45.101513 | orchestrator | Friday 29 August 2025 17:42:40 +0000 (0:00:00.552) 0:00:37.634 ********* 2025-08-29 17:42:45.101532 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:42:45.101550 | orchestrator | 2025-08-29 17:42:45.101568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.101587 | orchestrator | Friday 29 August 2025 17:42:40 +0000 (0:00:00.673) 0:00:38.308 ********* 2025-08-29 17:42:45.101627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 17:42:45.101648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 17:42:45.101666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 17:42:45.101685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 17:42:45.101703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 17:42:45.101722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 17:42:45.101741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-5 => (item=loop6) 2025-08-29 17:42:45.101758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 17:42:45.101777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 17:42:45.101810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 17:42:45.101828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 17:42:45.101846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 17:42:45.101861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 17:42:45.101872 | orchestrator | 2025-08-29 17:42:45.101883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.101893 | orchestrator | Friday 29 August 2025 17:42:41 +0000 (0:00:00.431) 0:00:38.740 ********* 2025-08-29 17:42:45.101904 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.101913 | orchestrator | 2025-08-29 17:42:45.101923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.101932 | orchestrator | Friday 29 August 2025 17:42:41 +0000 (0:00:00.192) 0:00:38.932 ********* 2025-08-29 17:42:45.101942 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.101951 | orchestrator | 2025-08-29 17:42:45.101960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.101970 | orchestrator | Friday 29 August 2025 17:42:41 +0000 (0:00:00.210) 0:00:39.143 ********* 2025-08-29 17:42:45.101979 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.101988 | orchestrator | 2025-08-29 17:42:45.101998 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2025-08-29 17:42:45.102007 | orchestrator | Friday 29 August 2025 17:42:41 +0000 (0:00:00.186) 0:00:39.329 ********* 2025-08-29 17:42:45.102060 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102071 | orchestrator | 2025-08-29 17:42:45.102081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102090 | orchestrator | Friday 29 August 2025 17:42:41 +0000 (0:00:00.200) 0:00:39.530 ********* 2025-08-29 17:42:45.102099 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102108 | orchestrator | 2025-08-29 17:42:45.102118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102127 | orchestrator | Friday 29 August 2025 17:42:42 +0000 (0:00:00.226) 0:00:39.756 ********* 2025-08-29 17:42:45.102136 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102146 | orchestrator | 2025-08-29 17:42:45.102155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102164 | orchestrator | Friday 29 August 2025 17:42:43 +0000 (0:00:00.817) 0:00:40.573 ********* 2025-08-29 17:42:45.102174 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102185 | orchestrator | 2025-08-29 17:42:45.102201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102211 | orchestrator | Friday 29 August 2025 17:42:43 +0000 (0:00:00.224) 0:00:40.798 ********* 2025-08-29 17:42:45.102220 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102229 | orchestrator | 2025-08-29 17:42:45.102238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102248 | orchestrator | Friday 29 August 2025 17:42:43 +0000 (0:00:00.201) 0:00:41.000 ********* 2025-08-29 17:42:45.102257 | orchestrator | ok: 
[testbed-node-5] => (item=sda1) 2025-08-29 17:42:45.102267 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 17:42:45.102276 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 17:42:45.102286 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 17:42:45.102295 | orchestrator | 2025-08-29 17:42:45.102304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102313 | orchestrator | Friday 29 August 2025 17:42:44 +0000 (0:00:00.804) 0:00:41.804 ********* 2025-08-29 17:42:45.102323 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102333 | orchestrator | 2025-08-29 17:42:45.102350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102360 | orchestrator | Friday 29 August 2025 17:42:44 +0000 (0:00:00.280) 0:00:42.085 ********* 2025-08-29 17:42:45.102376 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102386 | orchestrator | 2025-08-29 17:42:45.102395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102406 | orchestrator | Friday 29 August 2025 17:42:44 +0000 (0:00:00.213) 0:00:42.298 ********* 2025-08-29 17:42:45.102422 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102463 | orchestrator | 2025-08-29 17:42:45.102473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:42:45.102482 | orchestrator | Friday 29 August 2025 17:42:44 +0000 (0:00:00.154) 0:00:42.453 ********* 2025-08-29 17:42:45.102507 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:45.102517 | orchestrator | 2025-08-29 17:42:45.102527 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 17:42:45.102544 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:00.174) 0:00:42.628 ********* 2025-08-29 
17:42:50.308391 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-08-29 17:42:50.308546 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-08-29 17:42:50.308564 | orchestrator | 2025-08-29 17:42:50.308576 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 17:42:50.308587 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:00.155) 0:00:42.783 ********* 2025-08-29 17:42:50.308598 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.308610 | orchestrator | 2025-08-29 17:42:50.308621 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 17:42:50.308632 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:00.133) 0:00:42.916 ********* 2025-08-29 17:42:50.308643 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.308653 | orchestrator | 2025-08-29 17:42:50.308664 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 17:42:50.308674 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:00.119) 0:00:43.036 ********* 2025-08-29 17:42:50.308685 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.308695 | orchestrator | 2025-08-29 17:42:50.308706 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 17:42:50.308717 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:00.111) 0:00:43.147 ********* 2025-08-29 17:42:50.308727 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:42:50.308739 | orchestrator | 2025-08-29 17:42:50.308750 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 17:42:50.308760 | orchestrator | Friday 29 August 2025 17:42:45 +0000 (0:00:00.303) 0:00:43.450 ********* 2025-08-29 17:42:50.308771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 
'value': {'osd_lvm_uuid': '1b4aa328-f83b-56f5-ada4-b8257b659e12'}}) 2025-08-29 17:42:50.308784 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '756a9a3b-59dc-526e-9851-f6b5408065e4'}}) 2025-08-29 17:42:50.308795 | orchestrator | 2025-08-29 17:42:50.308805 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 17:42:50.308816 | orchestrator | Friday 29 August 2025 17:42:46 +0000 (0:00:00.201) 0:00:43.652 ********* 2025-08-29 17:42:50.308827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b4aa328-f83b-56f5-ada4-b8257b659e12'}})  2025-08-29 17:42:50.308839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '756a9a3b-59dc-526e-9851-f6b5408065e4'}})  2025-08-29 17:42:50.308850 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.308860 | orchestrator | 2025-08-29 17:42:50.308889 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 17:42:50.308902 | orchestrator | Friday 29 August 2025 17:42:46 +0000 (0:00:00.230) 0:00:43.883 ********* 2025-08-29 17:42:50.308914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b4aa328-f83b-56f5-ada4-b8257b659e12'}})  2025-08-29 17:42:50.308927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '756a9a3b-59dc-526e-9851-f6b5408065e4'}})  2025-08-29 17:42:50.308960 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.308973 | orchestrator | 2025-08-29 17:42:50.308985 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 17:42:50.308997 | orchestrator | Friday 29 August 2025 17:42:46 +0000 (0:00:00.238) 0:00:44.121 ********* 2025-08-29 17:42:50.309009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': 
{'osd_lvm_uuid': '1b4aa328-f83b-56f5-ada4-b8257b659e12'}})  2025-08-29 17:42:50.309021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '756a9a3b-59dc-526e-9851-f6b5408065e4'}})  2025-08-29 17:42:50.309033 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.309045 | orchestrator | 2025-08-29 17:42:50.309057 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 17:42:50.309069 | orchestrator | Friday 29 August 2025 17:42:46 +0000 (0:00:00.146) 0:00:44.268 ********* 2025-08-29 17:42:50.309081 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:42:50.309093 | orchestrator | 2025-08-29 17:42:50.309106 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 17:42:50.309118 | orchestrator | Friday 29 August 2025 17:42:46 +0000 (0:00:00.179) 0:00:44.447 ********* 2025-08-29 17:42:50.309130 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:42:50.309142 | orchestrator | 2025-08-29 17:42:50.309154 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 17:42:50.309166 | orchestrator | Friday 29 August 2025 17:42:47 +0000 (0:00:00.219) 0:00:44.667 ********* 2025-08-29 17:42:50.309178 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.309189 | orchestrator | 2025-08-29 17:42:50.309201 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 17:42:50.309213 | orchestrator | Friday 29 August 2025 17:42:47 +0000 (0:00:00.254) 0:00:44.922 ********* 2025-08-29 17:42:50.309225 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.309237 | orchestrator | 2025-08-29 17:42:50.309249 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 17:42:50.309261 | orchestrator | Friday 29 August 2025 17:42:47 +0000 (0:00:00.167) 0:00:45.089 ********* 
2025-08-29 17:42:50.309271 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.309282 | orchestrator | 2025-08-29 17:42:50.309292 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 17:42:50.309303 | orchestrator | Friday 29 August 2025 17:42:47 +0000 (0:00:00.252) 0:00:45.342 ********* 2025-08-29 17:42:50.309313 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 17:42:50.309324 | orchestrator |  "ceph_osd_devices": { 2025-08-29 17:42:50.309335 | orchestrator |  "sdb": { 2025-08-29 17:42:50.309346 | orchestrator |  "osd_lvm_uuid": "1b4aa328-f83b-56f5-ada4-b8257b659e12" 2025-08-29 17:42:50.309374 | orchestrator |  }, 2025-08-29 17:42:50.309386 | orchestrator |  "sdc": { 2025-08-29 17:42:50.309396 | orchestrator |  "osd_lvm_uuid": "756a9a3b-59dc-526e-9851-f6b5408065e4" 2025-08-29 17:42:50.309407 | orchestrator |  } 2025-08-29 17:42:50.309418 | orchestrator |  } 2025-08-29 17:42:50.309457 | orchestrator | } 2025-08-29 17:42:50.309470 | orchestrator | 2025-08-29 17:42:50.309480 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 17:42:50.309491 | orchestrator | Friday 29 August 2025 17:42:48 +0000 (0:00:00.320) 0:00:45.662 ********* 2025-08-29 17:42:50.309502 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.309512 | orchestrator | 2025-08-29 17:42:50.309523 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 17:42:50.309534 | orchestrator | Friday 29 August 2025 17:42:48 +0000 (0:00:00.176) 0:00:45.838 ********* 2025-08-29 17:42:50.309544 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:42:50.309555 | orchestrator | 2025-08-29 17:42:50.309565 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 17:42:50.309587 | orchestrator | Friday 29 August 2025 17:42:48 +0000 (0:00:00.401) 0:00:46.240 ********* 
2025-08-29 17:42:50.309597 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:42:50.309608 | orchestrator |
2025-08-29 17:42:50.309618 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 17:42:50.309629 | orchestrator | Friday 29 August 2025 17:42:48 +0000 (0:00:00.169) 0:00:46.409 *********
2025-08-29 17:42:50.309639 | orchestrator | changed: [testbed-node-5] => {
2025-08-29 17:42:50.309650 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-08-29 17:42:50.309661 | orchestrator |  "ceph_osd_devices": {
2025-08-29 17:42:50.309672 | orchestrator |  "sdb": {
2025-08-29 17:42:50.309682 | orchestrator |  "osd_lvm_uuid": "1b4aa328-f83b-56f5-ada4-b8257b659e12"
2025-08-29 17:42:50.309693 | orchestrator |  },
2025-08-29 17:42:50.309704 | orchestrator |  "sdc": {
2025-08-29 17:42:50.309714 | orchestrator |  "osd_lvm_uuid": "756a9a3b-59dc-526e-9851-f6b5408065e4"
2025-08-29 17:42:50.309725 | orchestrator |  }
2025-08-29 17:42:50.309735 | orchestrator |  },
2025-08-29 17:42:50.309746 | orchestrator |  "lvm_volumes": [
2025-08-29 17:42:50.309757 | orchestrator |  {
2025-08-29 17:42:50.309767 | orchestrator |  "data": "osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12",
2025-08-29 17:42:50.309777 | orchestrator |  "data_vg": "ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12"
2025-08-29 17:42:50.309788 | orchestrator |  },
2025-08-29 17:42:50.309799 | orchestrator |  {
2025-08-29 17:42:50.309809 | orchestrator |  "data": "osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4",
2025-08-29 17:42:50.309820 | orchestrator |  "data_vg": "ceph-756a9a3b-59dc-526e-9851-f6b5408065e4"
2025-08-29 17:42:50.309831 | orchestrator |  }
2025-08-29 17:42:50.309841 | orchestrator |  ]
2025-08-29 17:42:50.309852 | orchestrator |  }
2025-08-29 17:42:50.309863 | orchestrator | }
2025-08-29 17:42:50.309878 | orchestrator |
2025-08-29 17:42:50.309889 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 17:42:50.309900 | orchestrator | Friday 29 August 2025 17:42:49 +0000 (0:00:00.225) 0:00:46.635 *********
2025-08-29 17:42:50.309910 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 17:42:50.309921 | orchestrator |
2025-08-29 17:42:50.309931 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:42:50.309950 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:42:50.309963 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:42:50.309974 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 17:42:50.309985 | orchestrator |
2025-08-29 17:42:50.309996 | orchestrator |
2025-08-29 17:42:50.310006 | orchestrator |
2025-08-29 17:42:50.310090 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:42:50.310111 | orchestrator | Friday 29 August 2025 17:42:50 +0000 (0:00:01.176) 0:00:47.812 *********
2025-08-29 17:42:50.310131 | orchestrator | ===============================================================================
2025-08-29 17:42:50.310149 | orchestrator | Write configuration file ------------------------------------------------ 4.97s
2025-08-29 17:42:50.310168 | orchestrator | Add known partitions to the list of available block devices ------------- 1.37s
2025-08-29 17:42:50.310179 | orchestrator | Get initial list of available block devices ----------------------------- 1.37s
2025-08-29 17:42:50.310190 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2025-08-29 17:42:50.310201 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s
2025-08-29 17:42:50.310211 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.03s
2025-08-29 17:42:50.310231 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s
2025-08-29 17:42:50.310241 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2025-08-29 17:42:50.310252 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s
2025-08-29 17:42:50.310262 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2025-08-29 17:42:50.310273 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2025-08-29 17:42:50.310283 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.75s
2025-08-29 17:42:50.310293 | orchestrator | Print DB devices -------------------------------------------------------- 0.73s
2025-08-29 17:42:50.310304 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.71s
2025-08-29 17:42:50.310324 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-08-29 17:42:50.781386 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-08-29 17:42:50.781497 | orchestrator | Print configuration data ------------------------------------------------ 0.69s
2025-08-29 17:42:50.781509 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-08-29 17:42:50.781517 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-08-29 17:42:50.781526 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-08-29 17:43:14.158194 | orchestrator | 2025-08-29 17:43:14 | INFO  | Task cb1889d5-d8c0-4da2-b9e4-65827df97d9e (sync inventory) is running in background. Output coming soon.
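Editor's note: the "Set UUIDs for OSD VGs/LVs" and "Generate lvm_volumes structure (block only)" tasks above assign one stable UUID per OSD device and reuse it for both the `data` LV name (`osd-block-<uuid>`) and the `data_vg` VG name (`ceph-<uuid>`). The version-5 UUIDs in the output suggest deterministic, name-based generation so reruns are idempotent; the sketch below assumes a `uuid.uuid5` derivation keyed on host and device name (the playbook's actual namespace and seed are not visible in this log and are hypothetical here).

```python
import uuid

# Hypothetical namespace; the real playbook's namespace/seed is not shown in the log.
NAMESPACE = uuid.NAMESPACE_DNS


def lvm_volumes_for(hostname: str, devices: list) -> list:
    """Build a block-only lvm_volumes structure like the one printed above:
    one deterministic UUID per device, reused for the LV and the VG name."""
    volumes = []
    for dev in devices:
        # uuid5 is name-based, so the same host/device pair always yields
        # the same UUID -- a rerun of the play produces identical names.
        osd_uuid = uuid.uuid5(NAMESPACE, "{}-{}".format(hostname, dev))
        volumes.append({
            "data": "osd-block-{}".format(osd_uuid),
            "data_vg": "ceph-{}".format(osd_uuid),
        })
    return volumes


vols = lvm_volumes_for("testbed-node-5", ["sdb", "sdc"])
```

Each entry then feeds the later "Create block VGs" / "Create block LVs" tasks, which create the VG on the physical device and carve a single LV out of it.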
2025-08-29 17:43:34.468920 | orchestrator | 2025-08-29 17:43:15 | INFO  | Starting group_vars file reorganization
2025-08-29 17:43:34.469036 | orchestrator | 2025-08-29 17:43:15 | INFO  | Moved 0 file(s) to their respective directories
2025-08-29 17:43:34.469052 | orchestrator | 2025-08-29 17:43:15 | INFO  | Group_vars file reorganization completed
2025-08-29 17:43:34.469065 | orchestrator | 2025-08-29 17:43:18 | INFO  | Starting variable preparation from inventory
2025-08-29 17:43:34.469076 | orchestrator | 2025-08-29 17:43:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-08-29 17:43:34.469087 | orchestrator | 2025-08-29 17:43:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-08-29 17:43:34.469098 | orchestrator | 2025-08-29 17:43:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-08-29 17:43:34.469109 | orchestrator | 2025-08-29 17:43:19 | INFO  | 3 file(s) written, 6 host(s) processed
2025-08-29 17:43:34.469119 | orchestrator | 2025-08-29 17:43:19 | INFO  | Variable preparation completed
2025-08-29 17:43:34.469130 | orchestrator | 2025-08-29 17:43:21 | INFO  | Starting inventory overwrite handling
2025-08-29 17:43:34.469141 | orchestrator | 2025-08-29 17:43:21 | INFO  | Handling group overwrites in 99-overwrite
2025-08-29 17:43:34.469152 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removing group frr:children from 60-generic
2025-08-29 17:43:34.469163 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removing group storage:children from 50-kolla
2025-08-29 17:43:34.469174 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removing group netbird:children from 50-infrastruture
2025-08-29 17:43:34.469185 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removing group ceph-mds from 50-ceph
2025-08-29 17:43:34.469196 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removing group ceph-rgw from 50-ceph
2025-08-29 17:43:34.469207 | orchestrator | 2025-08-29 17:43:21 | INFO  | Handling group overwrites in 20-roles
2025-08-29 17:43:34.469220 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removing group k3s_node from 50-infrastruture
2025-08-29 17:43:34.469271 | orchestrator | 2025-08-29 17:43:21 | INFO  | Removed 6 group(s) in total
2025-08-29 17:43:34.469291 | orchestrator | 2025-08-29 17:43:21 | INFO  | Inventory overwrite handling completed
2025-08-29 17:43:34.469310 | orchestrator | 2025-08-29 17:43:22 | INFO  | Starting merge of inventory files
2025-08-29 17:43:34.469332 | orchestrator | 2025-08-29 17:43:22 | INFO  | Inventory files merged successfully
2025-08-29 17:43:34.469354 | orchestrator | 2025-08-29 17:43:26 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-08-29 17:43:34.469372 | orchestrator | 2025-08-29 17:43:33 | INFO  | Successfully wrote ClusterShell configuration
2025-08-29 17:43:34.469384 | orchestrator | [master 035a1cc] 2025-08-29-17-43
2025-08-29 17:43:34.469397 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-08-29 17:43:36.571037 | orchestrator | 2025-08-29 17:43:36 | INFO  | Task 7ce6044f-70ad-4424-910a-0b4f280e8823 (ceph-create-lvm-devices) was prepared for execution.
2025-08-29 17:43:36.571159 | orchestrator | 2025-08-29 17:43:36 | INFO  | It takes a moment until task 7ce6044f-70ad-4424-910a-0b4f280e8823 (ceph-create-lvm-devices) has been started and output is visible here.
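Editor's note: the "inventory overwrite handling" messages above show groups defined in a higher-priority inventory layer (99-overwrite, 20-roles) being removed from lower-priority inventory files (60-generic, 50-kolla, 50-infrastruture, 50-ceph) so each group has a single definition before the merge. A minimal sketch of that dedup step, with layer and group names taken from the log; the function itself is a hypothetical simplification of what the sync-inventory task does, operating on plain sets instead of real inventory files:

```python
def remove_overridden_groups(layers: dict, overwrite_layer: str) -> int:
    """Drop every group that the overwrite layer defines from all other layers.

    Returns the number of removals, matching the 'Removed N group(s) in total'
    log line. `layers` maps layer name -> set of group names defined in it.
    """
    overridden = layers[overwrite_layer]
    removed = 0
    for name, groups in layers.items():
        if name == overwrite_layer:
            continue
        # Intersect first so we iterate over a copy, not the set being mutated.
        for group in overridden & groups:
            groups.discard(group)
            removed += 1
    return removed


# Layer contents reduced to the groups mentioned in the log output above.
layers = {
    "99-overwrite": {"frr:children", "storage:children", "netbird:children"},
    "60-generic": {"frr:children", "openstack"},
    "50-kolla": {"storage:children", "control"},
    "50-infrastruture": {"netbird:children", "manager"},
}
removed = remove_overridden_groups(layers, "99-overwrite")  # removes 3 groups here
```

After this pass the remaining layer files can be concatenated into one merged inventory without conflicting group definitions, which is what the subsequent "merge of inventory files" step does.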
2025-08-29 17:43:48.631481 | orchestrator | 2025-08-29 17:43:48.631549 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 17:43:48.631560 | orchestrator | 2025-08-29 17:43:48.631578 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 17:43:48.631592 | orchestrator | Friday 29 August 2025 17:43:40 +0000 (0:00:00.363) 0:00:00.363 ********* 2025-08-29 17:43:48.631600 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 17:43:48.631607 | orchestrator | 2025-08-29 17:43:48.631615 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 17:43:48.631622 | orchestrator | Friday 29 August 2025 17:43:41 +0000 (0:00:00.338) 0:00:00.702 ********* 2025-08-29 17:43:48.631629 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:43:48.631637 | orchestrator | 2025-08-29 17:43:48.631645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631652 | orchestrator | Friday 29 August 2025 17:43:41 +0000 (0:00:00.273) 0:00:00.975 ********* 2025-08-29 17:43:48.631659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 17:43:48.631667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 17:43:48.631674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 17:43:48.631682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 17:43:48.631689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 17:43:48.631697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 17:43:48.631704 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 17:43:48.631711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 17:43:48.631718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 17:43:48.631725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 17:43:48.631733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 17:43:48.631740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 17:43:48.631747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 17:43:48.631754 | orchestrator | 2025-08-29 17:43:48.631762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631783 | orchestrator | Friday 29 August 2025 17:43:41 +0000 (0:00:00.477) 0:00:01.452 ********* 2025-08-29 17:43:48.631791 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631799 | orchestrator | 2025-08-29 17:43:48.631806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631825 | orchestrator | Friday 29 August 2025 17:43:42 +0000 (0:00:00.398) 0:00:01.851 ********* 2025-08-29 17:43:48.631832 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631839 | orchestrator | 2025-08-29 17:43:48.631846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631854 | orchestrator | Friday 29 August 2025 17:43:42 +0000 (0:00:00.201) 0:00:02.053 ********* 2025-08-29 17:43:48.631861 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631868 | orchestrator | 2025-08-29 17:43:48.631878 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-08-29 17:43:48.631885 | orchestrator | Friday 29 August 2025 17:43:42 +0000 (0:00:00.191) 0:00:02.244 ********* 2025-08-29 17:43:48.631892 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631899 | orchestrator | 2025-08-29 17:43:48.631907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631914 | orchestrator | Friday 29 August 2025 17:43:42 +0000 (0:00:00.238) 0:00:02.483 ********* 2025-08-29 17:43:48.631921 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631928 | orchestrator | 2025-08-29 17:43:48.631935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631942 | orchestrator | Friday 29 August 2025 17:43:43 +0000 (0:00:00.222) 0:00:02.705 ********* 2025-08-29 17:43:48.631949 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631956 | orchestrator | 2025-08-29 17:43:48.631964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631971 | orchestrator | Friday 29 August 2025 17:43:43 +0000 (0:00:00.246) 0:00:02.952 ********* 2025-08-29 17:43:48.631978 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.631985 | orchestrator | 2025-08-29 17:43:48.631992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.631999 | orchestrator | Friday 29 August 2025 17:43:43 +0000 (0:00:00.207) 0:00:03.159 ********* 2025-08-29 17:43:48.632006 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.632014 | orchestrator | 2025-08-29 17:43:48.632021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.632028 | orchestrator | Friday 29 August 2025 17:43:43 +0000 (0:00:00.180) 0:00:03.340 ********* 2025-08-29 17:43:48.632035 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e) 2025-08-29 17:43:48.632043 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e) 2025-08-29 17:43:48.632051 | orchestrator | 2025-08-29 17:43:48.632058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.632065 | orchestrator | Friday 29 August 2025 17:43:44 +0000 (0:00:00.451) 0:00:03.791 ********* 2025-08-29 17:43:48.632083 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae) 2025-08-29 17:43:48.632091 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae) 2025-08-29 17:43:48.632098 | orchestrator | 2025-08-29 17:43:48.632105 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.632112 | orchestrator | Friday 29 August 2025 17:43:44 +0000 (0:00:00.403) 0:00:04.195 ********* 2025-08-29 17:43:48.632119 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff) 2025-08-29 17:43:48.632127 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff) 2025-08-29 17:43:48.632134 | orchestrator | 2025-08-29 17:43:48.632141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.632153 | orchestrator | Friday 29 August 2025 17:43:45 +0000 (0:00:00.575) 0:00:04.770 ********* 2025-08-29 17:43:48.632160 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217) 2025-08-29 17:43:48.632167 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217) 2025-08-29 17:43:48.632175 | orchestrator | 2025-08-29 17:43:48.632182 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:43:48.632189 | orchestrator | Friday 29 August 2025 17:43:45 +0000 (0:00:00.714) 0:00:05.485 ********* 2025-08-29 17:43:48.632196 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:43:48.632203 | orchestrator | 2025-08-29 17:43:48.632210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:43:48.632218 | orchestrator | Friday 29 August 2025 17:43:46 +0000 (0:00:00.661) 0:00:06.146 ********* 2025-08-29 17:43:48.632225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 17:43:48.632232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 17:43:48.632239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 17:43:48.632246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 17:43:48.632253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 17:43:48.632260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 17:43:48.632267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 17:43:48.632274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 17:43:48.632282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 17:43:48.632289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 17:43:48.632296 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 17:43:48.632303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 17:43:48.632310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 17:43:48.632317 | orchestrator | 2025-08-29 17:43:48.632324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:43:48.632331 | orchestrator | Friday 29 August 2025 17:43:46 +0000 (0:00:00.414) 0:00:06.561 ********* 2025-08-29 17:43:48.632338 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.632346 | orchestrator | 2025-08-29 17:43:48.632353 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:43:48.632360 | orchestrator | Friday 29 August 2025 17:43:47 +0000 (0:00:00.264) 0:00:06.825 ********* 2025-08-29 17:43:48.632367 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.632374 | orchestrator | 2025-08-29 17:43:48.632381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:43:48.632388 | orchestrator | Friday 29 August 2025 17:43:47 +0000 (0:00:00.204) 0:00:07.030 ********* 2025-08-29 17:43:48.632395 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.632402 | orchestrator | 2025-08-29 17:43:48.632409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:43:48.632416 | orchestrator | Friday 29 August 2025 17:43:47 +0000 (0:00:00.190) 0:00:07.220 ********* 2025-08-29 17:43:48.632423 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:43:48.632430 | orchestrator | 2025-08-29 17:43:48.632451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:43:48.632459 | orchestrator | Friday 29 August 2025 
17:43:47 +0000 (0:00:00.203) 0:00:07.424 *********
2025-08-29 17:43:48.632470 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:48.632477 | orchestrator |
2025-08-29 17:43:48.632484 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:48.632491 | orchestrator | Friday 29 August 2025 17:43:47 +0000 (0:00:00.223) 0:00:07.647 *********
2025-08-29 17:43:48.632498 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:48.632505 | orchestrator |
2025-08-29 17:43:48.632512 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:48.632519 | orchestrator | Friday 29 August 2025 17:43:48 +0000 (0:00:00.201) 0:00:07.848 *********
2025-08-29 17:43:48.632526 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:48.632533 | orchestrator |
2025-08-29 17:43:48.632540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:48.632547 | orchestrator | Friday 29 August 2025 17:43:48 +0000 (0:00:00.247) 0:00:08.096 *********
2025-08-29 17:43:48.632558 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475061 | orchestrator |
2025-08-29 17:43:57.475156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:57.475173 | orchestrator | Friday 29 August 2025 17:43:48 +0000 (0:00:00.219) 0:00:08.315 *********
2025-08-29 17:43:57.475186 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-08-29 17:43:57.475198 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-08-29 17:43:57.475209 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-08-29 17:43:57.475219 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-08-29 17:43:57.475230 | orchestrator |
2025-08-29 17:43:57.475242 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:57.475252 | orchestrator | Friday 29 August 2025 17:43:49 +0000 (0:00:00.984) 0:00:09.299 *********
2025-08-29 17:43:57.475263 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475274 | orchestrator |
2025-08-29 17:43:57.475285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:57.475296 | orchestrator | Friday 29 August 2025 17:43:49 +0000 (0:00:00.220) 0:00:09.520 *********
2025-08-29 17:43:57.475306 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475317 | orchestrator |
2025-08-29 17:43:57.475328 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:57.475339 | orchestrator | Friday 29 August 2025 17:43:50 +0000 (0:00:00.260) 0:00:09.780 *********
2025-08-29 17:43:57.475350 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475361 | orchestrator |
2025-08-29 17:43:57.475371 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:43:57.475383 | orchestrator | Friday 29 August 2025 17:43:50 +0000 (0:00:00.196) 0:00:09.977 *********
2025-08-29 17:43:57.475394 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475405 | orchestrator |
2025-08-29 17:43:57.475416 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-08-29 17:43:57.475426 | orchestrator | Friday 29 August 2025 17:43:50 +0000 (0:00:00.209) 0:00:10.186 *********
2025-08-29 17:43:57.475437 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475499 | orchestrator |
2025-08-29 17:43:57.475511 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-08-29 17:43:57.475521 | orchestrator | Friday 29 August 2025 17:43:50 +0000 (0:00:00.122) 0:00:10.309 *********
2025-08-29 17:43:57.475533 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '76bb4758-fd8e-569b-82df-4997dbff6ccd'}})
2025-08-29 17:43:57.475544 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab048149-1b6d-515a-8df0-d9a146565eca'}})
2025-08-29 17:43:57.475555 | orchestrator |
2025-08-29 17:43:57.475566 | orchestrator | TASK [Create block VGs] ********************************************************
2025-08-29 17:43:57.475576 | orchestrator | Friday 29 August 2025 17:43:50 +0000 (0:00:00.178) 0:00:10.487 *********
2025-08-29 17:43:57.475588 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.475621 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.475634 | orchestrator |
2025-08-29 17:43:57.475661 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-08-29 17:43:57.475674 | orchestrator | Friday 29 August 2025 17:43:53 +0000 (0:00:02.282) 0:00:12.770 *********
2025-08-29 17:43:57.475692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.475704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.475714 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475725 | orchestrator |
2025-08-29 17:43:57.475736 | orchestrator | TASK [Create block LVs] ********************************************************
2025-08-29 17:43:57.475746 | orchestrator | Friday 29 August 2025 17:43:53 +0000 (0:00:00.177) 0:00:12.947 *********
2025-08-29 17:43:57.475757 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.475768 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.475778 | orchestrator |
2025-08-29 17:43:57.475789 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 17:43:57.475799 | orchestrator | Friday 29 August 2025 17:43:54 +0000 (0:00:01.589) 0:00:14.537 *********
2025-08-29 17:43:57.475811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.475830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.475849 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475867 | orchestrator |
2025-08-29 17:43:57.475886 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 17:43:57.475904 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:00.197) 0:00:14.735 *********
2025-08-29 17:43:57.475923 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.475941 | orchestrator |
2025-08-29 17:43:57.475960 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 17:43:57.476004 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:00.149) 0:00:14.885 *********
2025-08-29 17:43:57.476019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.476030 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.476040 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476051 | orchestrator |
2025-08-29 17:43:57.476062 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 17:43:57.476072 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:00.413) 0:00:15.298 *********
2025-08-29 17:43:57.476083 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476093 | orchestrator |
2025-08-29 17:43:57.476104 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 17:43:57.476114 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:00.172) 0:00:15.470 *********
2025-08-29 17:43:57.476125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.476146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.476156 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476167 | orchestrator |
2025-08-29 17:43:57.476178 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 17:43:57.476188 | orchestrator | Friday 29 August 2025 17:43:55 +0000 (0:00:00.186) 0:00:15.643 *********
2025-08-29 17:43:57.476198 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476209 | orchestrator |
2025-08-29 17:43:57.476219 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 17:43:57.476230 | orchestrator | Friday 29 August 2025 17:43:56 +0000 (0:00:00.186) 0:00:15.829 *********
2025-08-29 17:43:57.476240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.476251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.476262 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476272 | orchestrator |
2025-08-29 17:43:57.476283 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 17:43:57.476293 | orchestrator | Friday 29 August 2025 17:43:56 +0000 (0:00:00.205) 0:00:16.035 *********
2025-08-29 17:43:57.476304 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:43:57.476314 | orchestrator |
2025-08-29 17:43:57.476325 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 17:43:57.476336 | orchestrator | Friday 29 August 2025 17:43:56 +0000 (0:00:00.195) 0:00:16.231 *********
2025-08-29 17:43:57.476346 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.476362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.476373 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476384 | orchestrator |
2025-08-29 17:43:57.476395 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 17:43:57.476405 | orchestrator | Friday 29 August 2025 17:43:56 +0000 (0:00:00.189) 0:00:16.420 *********
2025-08-29 17:43:57.476416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.476426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.476437 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476468 | orchestrator |
2025-08-29 17:43:57.476479 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 17:43:57.476490 | orchestrator | Friday 29 August 2025 17:43:56 +0000 (0:00:00.225) 0:00:16.645 *********
2025-08-29 17:43:57.476500 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:43:57.476511 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:43:57.476522 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476532 | orchestrator |
2025-08-29 17:43:57.476543 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 17:43:57.476553 | orchestrator | Friday 29 August 2025 17:43:57 +0000 (0:00:00.199) 0:00:16.844 *********
2025-08-29 17:43:57.476564 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476574 | orchestrator |
2025-08-29 17:43:57.476585 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 17:43:57.476602 | orchestrator | Friday 29 August 2025 17:43:57 +0000 (0:00:00.160) 0:00:17.005 *********
2025-08-29 17:43:57.476613 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:43:57.476626 | orchestrator |
2025-08-29 17:43:57.476653 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 17:44:05.273857 | orchestrator | Friday 29 August 2025 17:43:57 +0000 (0:00:00.153) 0:00:17.159 *********
2025-08-29 17:44:05.273957 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.273974 | orchestrator |
2025-08-29 17:44:05.273987 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 17:44:05.273999 | orchestrator | Friday 29 August 2025 17:43:57 +0000 (0:00:00.190) 0:00:17.349 *********
2025-08-29 17:44:05.274010 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:44:05.274077 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 17:44:05.274089 | orchestrator | }
2025-08-29 17:44:05.274101 | orchestrator |
2025-08-29 17:44:05.274120 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 17:44:05.274141 | orchestrator | Friday 29 August 2025 17:43:58 +0000 (0:00:00.411) 0:00:17.760 *********
2025-08-29 17:44:05.274160 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:44:05.274181 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 17:44:05.274202 | orchestrator | }
2025-08-29 17:44:05.274223 | orchestrator |
2025-08-29 17:44:05.274246 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 17:44:05.274268 | orchestrator | Friday 29 August 2025 17:43:58 +0000 (0:00:00.171) 0:00:17.932 *********
2025-08-29 17:44:05.274288 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:44:05.274309 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 17:44:05.274330 | orchestrator | }
2025-08-29 17:44:05.274351 | orchestrator |
2025-08-29 17:44:05.274372 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 17:44:05.274393 | orchestrator | Friday 29 August 2025 17:43:58 +0000 (0:00:00.169) 0:00:18.102 *********
2025-08-29 17:44:05.274416 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:05.274439 | orchestrator |
2025-08-29 17:44:05.274484 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 17:44:05.274504 | orchestrator | Friday 29 August 2025 17:43:59 +0000 (0:00:00.719) 0:00:18.821 *********
2025-08-29 17:44:05.274523 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:05.274542 | orchestrator |
2025-08-29 17:44:05.274562 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 17:44:05.274582 | orchestrator | Friday 29 August 2025 17:43:59 +0000 (0:00:00.544) 0:00:19.365 *********
2025-08-29 17:44:05.274601 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:05.274621 | orchestrator |
2025-08-29 17:44:05.274632 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 17:44:05.274643 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.592) 0:00:19.958 *********
2025-08-29 17:44:05.274654 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:05.274664 | orchestrator |
2025-08-29 17:44:05.274675 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 17:44:05.274686 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.184) 0:00:20.142 *********
2025-08-29 17:44:05.274696 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.274707 | orchestrator |
2025-08-29 17:44:05.274718 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 17:44:05.274728 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.142) 0:00:20.284 *********
2025-08-29 17:44:05.274739 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.274749 | orchestrator |
2025-08-29 17:44:05.274760 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 17:44:05.274771 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.135) 0:00:20.420 *********
2025-08-29 17:44:05.274781 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:44:05.274815 | orchestrator |     "vgs_report": {
2025-08-29 17:44:05.274826 | orchestrator |         "vg": []
2025-08-29 17:44:05.274837 | orchestrator |     }
2025-08-29 17:44:05.274848 | orchestrator | }
2025-08-29 17:44:05.274858 | orchestrator |
2025-08-29 17:44:05.274869 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 17:44:05.274880 | orchestrator | Friday 29 August 2025 17:44:00 +0000 (0:00:00.197) 0:00:20.618 *********
2025-08-29 17:44:05.274891 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.274901 | orchestrator |
2025-08-29 17:44:05.274912 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 17:44:05.274923 | orchestrator | Friday 29 August 2025 17:44:01 +0000 (0:00:00.167) 0:00:20.786 *********
2025-08-29 17:44:05.274933 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.274944 | orchestrator |
2025-08-29 17:44:05.274954 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 17:44:05.274965 | orchestrator | Friday 29 August 2025 17:44:01 +0000 (0:00:00.215) 0:00:21.001 *********
2025-08-29 17:44:05.274976 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.274986 | orchestrator |
2025-08-29 17:44:05.274997 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 17:44:05.275007 | orchestrator | Friday 29 August 2025 17:44:01 +0000 (0:00:00.425) 0:00:21.427 *********
2025-08-29 17:44:05.275018 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275028 | orchestrator |
2025-08-29 17:44:05.275039 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 17:44:05.275050 | orchestrator | Friday 29 August 2025 17:44:01 +0000 (0:00:00.172) 0:00:21.600 *********
2025-08-29 17:44:05.275060 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275071 | orchestrator |
2025-08-29 17:44:05.275097 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 17:44:05.275108 | orchestrator | Friday 29 August 2025 17:44:02 +0000 (0:00:00.204) 0:00:21.804 *********
2025-08-29 17:44:05.275119 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275130 | orchestrator |
2025-08-29 17:44:05.275149 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 17:44:05.275167 | orchestrator | Friday 29 August 2025 17:44:02 +0000 (0:00:00.175) 0:00:21.980 *********
2025-08-29 17:44:05.275185 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275202 | orchestrator |
2025-08-29 17:44:05.275220 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 17:44:05.275238 | orchestrator | Friday 29 August 2025 17:44:02 +0000 (0:00:00.260) 0:00:22.240 *********
2025-08-29 17:44:05.275258 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275277 | orchestrator |
2025-08-29 17:44:05.275296 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 17:44:05.275327 | orchestrator | Friday 29 August 2025 17:44:02 +0000 (0:00:00.174) 0:00:22.415 *********
2025-08-29 17:44:05.275338 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275349 | orchestrator |
2025-08-29 17:44:05.275359 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 17:44:05.275370 | orchestrator | Friday 29 August 2025 17:44:02 +0000 (0:00:00.171) 0:00:22.586 *********
2025-08-29 17:44:05.275380 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275391 | orchestrator |
2025-08-29 17:44:05.275401 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 17:44:05.275412 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:00.162) 0:00:22.749 *********
2025-08-29 17:44:05.275422 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275433 | orchestrator |
2025-08-29 17:44:05.275465 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 17:44:05.275477 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:00.168) 0:00:22.917 *********
2025-08-29 17:44:05.275488 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275498 | orchestrator |
2025-08-29 17:44:05.275509 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 17:44:05.275569 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:00.166) 0:00:23.084 *********
2025-08-29 17:44:05.275581 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275592 | orchestrator |
2025-08-29 17:44:05.275603 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 17:44:05.275614 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:00.191) 0:00:23.275 *********
2025-08-29 17:44:05.275625 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275635 | orchestrator |
2025-08-29 17:44:05.275646 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 17:44:05.275657 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:00.154) 0:00:23.429 *********
2025-08-29 17:44:05.275669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:05.275681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:05.275691 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275702 | orchestrator |
2025-08-29 17:44:05.275713 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 17:44:05.275724 | orchestrator | Friday 29 August 2025 17:44:03 +0000 (0:00:00.225) 0:00:23.655 *********
2025-08-29 17:44:05.275734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:05.275745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:05.275756 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275767 | orchestrator |
2025-08-29 17:44:05.275778 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 17:44:05.275789 | orchestrator | Friday 29 August 2025 17:44:04 +0000 (0:00:00.482) 0:00:24.138 *********
2025-08-29 17:44:05.275806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:05.275817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:05.275827 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275838 | orchestrator |
2025-08-29 17:44:05.275849 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 17:44:05.275859 | orchestrator | Friday 29 August 2025 17:44:04 +0000 (0:00:00.218) 0:00:24.356 *********
2025-08-29 17:44:05.275870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:05.275881 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:05.275892 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275902 | orchestrator |
2025-08-29 17:44:05.275913 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 17:44:05.275924 | orchestrator | Friday 29 August 2025 17:44:04 +0000 (0:00:00.186) 0:00:24.543 *********
2025-08-29 17:44:05.275935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:05.275945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:05.275956 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:05.275967 | orchestrator |
2025-08-29 17:44:05.275977 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 17:44:05.275995 | orchestrator | Friday 29 August 2025 17:44:05 +0000 (0:00:00.177) 0:00:24.720 *********
2025-08-29 17:44:05.276006 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:05.276024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503313 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:11.503384 | orchestrator |
2025-08-29 17:44:11.503399 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 17:44:11.503411 | orchestrator | Friday 29 August 2025 17:44:05 +0000 (0:00:00.238) 0:00:24.958 *********
2025-08-29 17:44:11.503422 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:11.503434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503471 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:11.503493 | orchestrator |
2025-08-29 17:44:11.503513 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 17:44:11.503532 | orchestrator | Friday 29 August 2025 17:44:05 +0000 (0:00:00.247) 0:00:25.206 *********
2025-08-29 17:44:11.503547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:11.503558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503569 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:11.503579 | orchestrator |
2025-08-29 17:44:11.503591 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 17:44:11.503601 | orchestrator | Friday 29 August 2025 17:44:05 +0000 (0:00:00.160) 0:00:25.366 *********
2025-08-29 17:44:11.503612 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:11.503624 | orchestrator |
2025-08-29 17:44:11.503635 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 17:44:11.503645 | orchestrator | Friday 29 August 2025 17:44:06 +0000 (0:00:00.576) 0:00:25.942 *********
2025-08-29 17:44:11.503656 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:11.503667 | orchestrator |
2025-08-29 17:44:11.503677 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 17:44:11.503688 | orchestrator | Friday 29 August 2025 17:44:06 +0000 (0:00:00.538) 0:00:26.480 *********
2025-08-29 17:44:11.503699 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:44:11.503709 | orchestrator |
2025-08-29 17:44:11.503720 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 17:44:11.503731 | orchestrator | Friday 29 August 2025 17:44:06 +0000 (0:00:00.161) 0:00:26.642 *********
2025-08-29 17:44:11.503742 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'vg_name': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:11.503753 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'vg_name': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503764 | orchestrator |
2025-08-29 17:44:11.503775 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 17:44:11.503786 | orchestrator | Friday 29 August 2025 17:44:07 +0000 (0:00:00.192) 0:00:26.835 *********
2025-08-29 17:44:11.503797 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:11.503808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503838 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:11.503849 | orchestrator |
2025-08-29 17:44:11.503860 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 17:44:11.503870 | orchestrator | Friday 29 August 2025 17:44:07 +0000 (0:00:00.178) 0:00:27.014 *********
2025-08-29 17:44:11.503881 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:11.503894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503906 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:11.503918 | orchestrator |
2025-08-29 17:44:11.503930 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 17:44:11.503942 | orchestrator | Friday 29 August 2025 17:44:07 +0000 (0:00:00.456) 0:00:27.470 *********
2025-08-29 17:44:11.503954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:44:11.503967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:44:11.503979 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:44:11.503991 | orchestrator |
2025-08-29 17:44:11.504003 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 17:44:11.504015 | orchestrator | Friday 29 August 2025 17:44:07 +0000 (0:00:00.188) 0:00:27.659 *********
2025-08-29 17:44:11.504027 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 17:44:11.504039 | orchestrator |     "lvm_report": {
2025-08-29 17:44:11.504052 | orchestrator |         "lv": [
2025-08-29 17:44:11.504064 | orchestrator |             {
2025-08-29 17:44:11.504090 | orchestrator |                 "lv_name": "osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd",
2025-08-29 17:44:11.504103 | orchestrator |                 "vg_name": "ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd"
2025-08-29 17:44:11.504115 | orchestrator |             },
2025-08-29 17:44:11.504127 | orchestrator |             {
2025-08-29 17:44:11.504139 | orchestrator |                 "lv_name": "osd-block-ab048149-1b6d-515a-8df0-d9a146565eca",
2025-08-29 17:44:11.504150 | orchestrator |                 "vg_name": "ceph-ab048149-1b6d-515a-8df0-d9a146565eca"
2025-08-29 17:44:11.504170 | orchestrator |             }
2025-08-29 17:44:11.504188 | orchestrator |         ],
2025-08-29 17:44:11.504206 | orchestrator |         "pv": [
2025-08-29 17:44:11.504223 | orchestrator |             {
2025-08-29 17:44:11.504241 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 17:44:11.504259 | orchestrator |                 "vg_name": "ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd"
2025-08-29 17:44:11.504278 | orchestrator |             },
2025-08-29 17:44:11.504297 | orchestrator |             {
2025-08-29 17:44:11.504316 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 17:44:11.504334 | orchestrator |                 "vg_name": "ceph-ab048149-1b6d-515a-8df0-d9a146565eca"
2025-08-29 17:44:11.504352 | orchestrator |             }
2025-08-29 17:44:11.504370 | orchestrator |         ]
2025-08-29 17:44:11.504389 | orchestrator |     }
2025-08-29 17:44:11.504407 | orchestrator | }
2025-08-29 17:44:11.504426 | orchestrator |
2025-08-29 17:44:11.504468 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 17:44:11.504489 | orchestrator |
2025-08-29 17:44:11.504508 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 17:44:11.504526 | orchestrator | Friday 29 August 2025 17:44:08 +0000 (0:00:00.385) 0:00:28.044 *********
2025-08-29 17:44:11.504544 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 17:44:11.504563 | orchestrator |
2025-08-29 17:44:11.504596 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 17:44:11.504615 | orchestrator | Friday 29 August 2025 17:44:08 +0000 (0:00:00.308) 0:00:28.353 *********
2025-08-29 17:44:11.504631 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:44:11.504642 | orchestrator |
2025-08-29 17:44:11.504653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.504663 | orchestrator | Friday 29 August 2025 17:44:08 +0000 (0:00:00.245) 0:00:28.599 *********
2025-08-29 17:44:11.504689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-08-29 17:44:11.504700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-08-29 17:44:11.504710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-08-29 17:44:11.504721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-08-29 17:44:11.504731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-08-29 17:44:11.504742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-08-29 17:44:11.504752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-08-29 17:44:11.504763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-08-29 17:44:11.504778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-08-29 17:44:11.504788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-08-29 17:44:11.504799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-08-29 17:44:11.504810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-08-29 17:44:11.504821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-08-29 17:44:11.504831 | orchestrator |
2025-08-29 17:44:11.504841 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.504852 | orchestrator | Friday 29 August 2025 17:44:09 +0000 (0:00:00.533) 0:00:29.132 *********
2025-08-29 17:44:11.504862 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.504873 | orchestrator |
2025-08-29 17:44:11.504883 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.504894 | orchestrator | Friday 29 August 2025 17:44:09 +0000 (0:00:00.248) 0:00:29.381 *********
2025-08-29 17:44:11.504904 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.504914 | orchestrator |
2025-08-29 17:44:11.504925 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.504935 | orchestrator | Friday 29 August 2025 17:44:09 +0000 (0:00:00.212) 0:00:29.593 *********
2025-08-29 17:44:11.504945 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.504956 | orchestrator |
2025-08-29 17:44:11.504966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.504977 | orchestrator | Friday 29 August 2025 17:44:10 +0000 (0:00:00.210) 0:00:29.804 *********
2025-08-29 17:44:11.504987 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.504997 | orchestrator |
2025-08-29 17:44:11.505008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.505018 | orchestrator | Friday 29 August 2025 17:44:10 +0000 (0:00:00.693) 0:00:30.497 *********
2025-08-29 17:44:11.505029 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.505039 | orchestrator |
2025-08-29 17:44:11.505049 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.505060 | orchestrator | Friday 29 August 2025 17:44:11 +0000 (0:00:00.245) 0:00:30.742 *********
2025-08-29 17:44:11.505070 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.505080 | orchestrator |
2025-08-29 17:44:11.505091 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:11.505107 | orchestrator | Friday 29 August 2025 17:44:11 +0000 (0:00:00.229) 0:00:30.972 *********
2025-08-29 17:44:11.505118 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:11.505129 | orchestrator |
2025-08-29 17:44:11.505149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:24.347027 | orchestrator | Friday 29 August 2025 17:44:11 +0000 (0:00:00.214) 0:00:31.186 *********
2025-08-29 17:44:24.347157 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347172 | orchestrator |
2025-08-29 17:44:24.347184 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:24.347194 | orchestrator | Friday 29 August 2025 17:44:11 +0000 (0:00:00.237) 0:00:31.424 *********
2025-08-29 17:44:24.347205 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa)
2025-08-29 17:44:24.347216 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa)
2025-08-29 17:44:24.347226 | orchestrator |
2025-08-29 17:44:24.347236 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:24.347246 | orchestrator | Friday 29 August 2025 17:44:12 +0000 (0:00:00.558) 0:00:31.982 *********
2025-08-29 17:44:24.347256 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f)
2025-08-29 17:44:24.347265 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f)
2025-08-29 17:44:24.347275 | orchestrator |
2025-08-29 17:44:24.347284 |
orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:24.347294 | orchestrator | Friday 29 August 2025 17:44:12 +0000 (0:00:00.560) 0:00:32.543 *********
2025-08-29 17:44:24.347303 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87)
2025-08-29 17:44:24.347313 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87)
2025-08-29 17:44:24.347322 | orchestrator |
2025-08-29 17:44:24.347332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:24.347341 | orchestrator | Friday 29 August 2025 17:44:13 +0000 (0:00:00.493) 0:00:33.037 *********
2025-08-29 17:44:24.347350 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e)
2025-08-29 17:44:24.347360 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e)
2025-08-29 17:44:24.347370 | orchestrator |
2025-08-29 17:44:24.347379 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 17:44:24.347389 | orchestrator | Friday 29 August 2025 17:44:13 +0000 (0:00:00.486) 0:00:33.523 *********
2025-08-29 17:44:24.347398 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 17:44:24.347408 | orchestrator |
2025-08-29 17:44:24.347417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347427 | orchestrator | Friday 29 August 2025 17:44:14 +0000 (0:00:00.363) 0:00:33.887 *********
2025-08-29 17:44:24.347436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 17:44:24.347492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 17:44:24.347503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 17:44:24.347514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 17:44:24.347525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 17:44:24.347536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 17:44:24.347546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 17:44:24.347585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 17:44:24.347596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 17:44:24.347607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 17:44:24.347618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 17:44:24.347629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 17:44:24.347639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 17:44:24.347650 | orchestrator |
2025-08-29 17:44:24.347661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347672 | orchestrator | Friday 29 August 2025 17:44:14 +0000 (0:00:00.739) 0:00:34.627 *********
2025-08-29 17:44:24.347682 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347693 | orchestrator |
2025-08-29 17:44:24.347704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347714 | orchestrator | Friday 29 August 2025 17:44:15 +0000 (0:00:00.287) 0:00:34.914 *********
2025-08-29 17:44:24.347724 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347734 | orchestrator |
2025-08-29 17:44:24.347746 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347756 | orchestrator | Friday 29 August 2025 17:44:15 +0000 (0:00:00.251) 0:00:35.165 *********
2025-08-29 17:44:24.347767 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347777 | orchestrator |
2025-08-29 17:44:24.347788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347798 | orchestrator | Friday 29 August 2025 17:44:15 +0000 (0:00:00.195) 0:00:35.362 *********
2025-08-29 17:44:24.347809 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347819 | orchestrator |
2025-08-29 17:44:24.347850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347861 | orchestrator | Friday 29 August 2025 17:44:15 +0000 (0:00:00.270) 0:00:35.633 *********
2025-08-29 17:44:24.347872 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347883 | orchestrator |
2025-08-29 17:44:24.347893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347903 | orchestrator | Friday 29 August 2025 17:44:16 +0000 (0:00:00.220) 0:00:35.853 *********
2025-08-29 17:44:24.347913 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347922 | orchestrator |
2025-08-29 17:44:24.347932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347941 | orchestrator | Friday 29 August 2025 17:44:16 +0000 (0:00:00.249) 0:00:36.102 *********
2025-08-29 17:44:24.347951 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347960 | orchestrator |
2025-08-29 17:44:24.347970 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.347979 | orchestrator | Friday 29 August 2025 17:44:16 +0000 (0:00:00.345) 0:00:36.448 *********
2025-08-29 17:44:24.347988 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.347998 | orchestrator |
2025-08-29 17:44:24.348007 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.348017 | orchestrator | Friday 29 August 2025 17:44:17 +0000 (0:00:00.389) 0:00:36.838 *********
2025-08-29 17:44:24.348026 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-08-29 17:44:24.348036 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-08-29 17:44:24.348046 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-08-29 17:44:24.348056 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-08-29 17:44:24.348065 | orchestrator |
2025-08-29 17:44:24.348075 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.348085 | orchestrator | Friday 29 August 2025 17:44:18 +0000 (0:00:01.362) 0:00:38.201 *********
2025-08-29 17:44:24.348103 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.348112 | orchestrator |
2025-08-29 17:44:24.348122 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.348132 | orchestrator | Friday 29 August 2025 17:44:18 +0000 (0:00:00.264) 0:00:38.465 *********
2025-08-29 17:44:24.348141 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.348151 | orchestrator |
2025-08-29 17:44:24.348160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.348169 | orchestrator | Friday 29 August 2025 17:44:19 +0000 (0:00:00.270) 0:00:38.736 *********
2025-08-29 17:44:24.348179 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.348188 |
orchestrator |
2025-08-29 17:44:24.348198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 17:44:24.348207 | orchestrator | Friday 29 August 2025 17:44:19 +0000 (0:00:00.781) 0:00:39.518 *********
2025-08-29 17:44:24.348217 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.348227 | orchestrator |
2025-08-29 17:44:24.348236 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-08-29 17:44:24.348246 | orchestrator | Friday 29 August 2025 17:44:20 +0000 (0:00:00.278) 0:00:39.796 *********
2025-08-29 17:44:24.348255 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.348265 | orchestrator |
2025-08-29 17:44:24.348274 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-08-29 17:44:24.348284 | orchestrator | Friday 29 August 2025 17:44:20 +0000 (0:00:00.181) 0:00:39.978 *********
2025-08-29 17:44:24.348294 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'}})
2025-08-29 17:44:24.348304 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '90167df7-514b-5586-921e-4d7a2964fdd2'}})
2025-08-29 17:44:24.348314 | orchestrator |
2025-08-29 17:44:24.348323 | orchestrator | TASK [Create block VGs] ********************************************************
2025-08-29 17:44:24.348333 | orchestrator | Friday 29 August 2025 17:44:20 +0000 (0:00:00.259) 0:00:40.237 *********
2025-08-29 17:44:24.348344 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:24.348356 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:24.348366 | orchestrator |
2025-08-29 17:44:24.348375 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-08-29 17:44:24.348385 | orchestrator | Friday 29 August 2025 17:44:22 +0000 (0:00:02.043) 0:00:42.280 *********
2025-08-29 17:44:24.348395 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:24.348406 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:24.348416 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:24.348425 | orchestrator |
2025-08-29 17:44:24.348435 | orchestrator | TASK [Create block LVs] ********************************************************
2025-08-29 17:44:24.348445 | orchestrator | Friday 29 August 2025 17:44:22 +0000 (0:00:00.190) 0:00:42.471 *********
2025-08-29 17:44:24.348473 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:24.348483 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:24.348493 | orchestrator |
2025-08-29 17:44:24.348508 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 17:44:31.243414 | orchestrator | Friday 29 August 2025 17:44:24 +0000 (0:00:01.547) 0:00:44.019 *********
2025-08-29 17:44:31.243676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.243708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.243726 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.243744 | orchestrator |
2025-08-29 17:44:31.243762 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 17:44:31.243780 | orchestrator | Friday 29 August 2025 17:44:24 +0000 (0:00:00.179) 0:00:44.199 *********
2025-08-29 17:44:31.243797 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.243816 | orchestrator |
2025-08-29 17:44:31.243834 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 17:44:31.243852 | orchestrator | Friday 29 August 2025 17:44:24 +0000 (0:00:00.204) 0:00:44.403 *********
2025-08-29 17:44:31.243869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.243909 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.243929 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.243947 | orchestrator |
2025-08-29 17:44:31.243964 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 17:44:31.243981 | orchestrator | Friday 29 August 2025 17:44:24 +0000 (0:00:00.190) 0:00:44.594 *********
2025-08-29 17:44:31.243997 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244013 | orchestrator |
2025-08-29 17:44:31.244031 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 17:44:31.244047 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:00.154) 0:00:44.749 *********
2025-08-29 17:44:31.244063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.244079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.244096 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244112 | orchestrator |
2025-08-29 17:44:31.244127 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 17:44:31.244144 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:00.174) 0:00:44.923 *********
2025-08-29 17:44:31.244155 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244164 | orchestrator |
2025-08-29 17:44:31.244179 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 17:44:31.244189 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:00.386) 0:00:45.310 *********
2025-08-29 17:44:31.244198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.244208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.244218 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244227 | orchestrator |
2025-08-29 17:44:31.244236 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 17:44:31.244246 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:00.188) 0:00:45.498 *********
2025-08-29 17:44:31.244255 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:44:31.244266 | orchestrator |
2025-08-29 17:44:31.244275 | orchestrator | TASK [Count OSDs put on ceph_db_devices
defined in lvm_volumes] ****************
2025-08-29 17:44:31.244285 | orchestrator | Friday 29 August 2025 17:44:25 +0000 (0:00:00.177) 0:00:45.675 *********
2025-08-29 17:44:31.244305 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.244315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.244325 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244334 | orchestrator |
2025-08-29 17:44:31.244344 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 17:44:31.244353 | orchestrator | Friday 29 August 2025 17:44:26 +0000 (0:00:00.211) 0:00:45.886 *********
2025-08-29 17:44:31.244363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.244373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.244382 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244391 | orchestrator |
2025-08-29 17:44:31.244401 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 17:44:31.244410 | orchestrator | Friday 29 August 2025 17:44:26 +0000 (0:00:00.258) 0:00:46.145 *********
2025-08-29 17:44:31.244442 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:31.244486 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:31.244504 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244520 | orchestrator |
2025-08-29 17:44:31.244537 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 17:44:31.244554 | orchestrator | Friday 29 August 2025 17:44:26 +0000 (0:00:00.271) 0:00:46.416 *********
2025-08-29 17:44:31.244570 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244586 | orchestrator |
2025-08-29 17:44:31.244596 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 17:44:31.244605 | orchestrator | Friday 29 August 2025 17:44:26 +0000 (0:00:00.182) 0:00:46.599 *********
2025-08-29 17:44:31.244614 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244624 | orchestrator |
2025-08-29 17:44:31.244634 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 17:44:31.244643 | orchestrator | Friday 29 August 2025 17:44:27 +0000 (0:00:00.186) 0:00:46.785 *********
2025-08-29 17:44:31.244653 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.244662 | orchestrator |
2025-08-29 17:44:31.244671 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 17:44:31.244681 | orchestrator | Friday 29 August 2025 17:44:27 +0000 (0:00:00.214) 0:00:47.000 *********
2025-08-29 17:44:31.244690 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 17:44:31.244700 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-08-29 17:44:31.244710 | orchestrator | }
2025-08-29 17:44:31.244720 | orchestrator |
2025-08-29 17:44:31.244730 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 17:44:31.244739 | orchestrator | Friday 29 August 2025 17:44:27 +0000 (0:00:00.198) 0:00:47.199 *********
2025-08-29 17:44:31.244749 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 17:44:31.244758 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-08-29 17:44:31.244768 | orchestrator | }
2025-08-29 17:44:31.244777 | orchestrator |
2025-08-29 17:44:31.244787 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 17:44:31.244796 | orchestrator | Friday 29 August 2025 17:44:27 +0000 (0:00:00.189) 0:00:47.388 *********
2025-08-29 17:44:31.244806 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 17:44:31.244816 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 17:44:31.244825 | orchestrator | }
2025-08-29 17:44:31.244877 | orchestrator |
2025-08-29 17:44:31.244887 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 17:44:31.244897 | orchestrator | Friday 29 August 2025 17:44:27 +0000 (0:00:00.158) 0:00:47.547 *********
2025-08-29 17:44:31.244907 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:44:31.244916 | orchestrator |
2025-08-29 17:44:31.244926 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 17:44:31.244935 | orchestrator | Friday 29 August 2025 17:44:28 +0000 (0:00:00.842) 0:00:48.390 *********
2025-08-29 17:44:31.244945 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:44:31.244954 | orchestrator |
2025-08-29 17:44:31.244973 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 17:44:31.244987 | orchestrator | Friday 29 August 2025 17:44:29 +0000 (0:00:00.549) 0:00:48.939 *********
2025-08-29 17:44:31.244997 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:44:31.245006 | orchestrator |
2025-08-29 17:44:31.245016 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 17:44:31.245025 | orchestrator | Friday 29 August 2025 17:44:29 +0000 (0:00:00.199) 0:00:49.498 *********
2025-08-29 17:44:31.245035 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:44:31.245044 | orchestrator |
2025-08-29 17:44:31.245053 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 17:44:31.245061 | orchestrator | Friday 29 August 2025 17:44:30 +0000 (0:00:00.199) 0:00:49.698 *********
2025-08-29 17:44:31.245069 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.245076 | orchestrator |
2025-08-29 17:44:31.245084 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 17:44:31.245092 | orchestrator | Friday 29 August 2025 17:44:30 +0000 (0:00:00.167) 0:00:49.865 *********
2025-08-29 17:44:31.245100 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.245107 | orchestrator |
2025-08-29 17:44:31.245115 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 17:44:31.245123 | orchestrator | Friday 29 August 2025 17:44:30 +0000 (0:00:00.154) 0:00:50.074 *********
2025-08-29 17:44:31.245130 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 17:44:31.245138 | orchestrator |  "vgs_report": {
2025-08-29 17:44:31.245146 | orchestrator |  "vg": []
2025-08-29 17:44:31.245154 | orchestrator |  }
2025-08-29 17:44:31.245162 | orchestrator | }
2025-08-29 17:44:31.245169 | orchestrator |
2025-08-29 17:44:31.245177 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 17:44:31.245185 | orchestrator | Friday 29 August 2025 17:44:30 +0000 (0:00:00.154) 0:00:50.228 *********
2025-08-29 17:44:31.245192 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.245200 | orchestrator |
2025-08-29 17:44:31.245208 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 17:44:31.245215 | orchestrator | Friday 29 August 2025 17:44:30 +0000 (0:00:00.193) 0:00:50.421 ********* 2025-08-29
17:44:31.245223 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.245231 | orchestrator |
2025-08-29 17:44:31.245238 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 17:44:31.245246 | orchestrator | Friday 29 August 2025 17:44:30 +0000 (0:00:00.198) 0:00:50.620 *********
2025-08-29 17:44:31.245254 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.245262 | orchestrator |
2025-08-29 17:44:31.245269 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 17:44:31.245277 | orchestrator | Friday 29 August 2025 17:44:31 +0000 (0:00:00.149) 0:00:50.769 *********
2025-08-29 17:44:31.245285 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:31.245292 | orchestrator |
2025-08-29 17:44:31.245300 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 17:44:31.245315 | orchestrator | Friday 29 August 2025 17:44:31 +0000 (0:00:00.154) 0:00:50.924 *********
2025-08-29 17:44:36.668820 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.668912 | orchestrator |
2025-08-29 17:44:36.668928 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 17:44:36.668960 | orchestrator | Friday 29 August 2025 17:44:31 +0000 (0:00:00.154) 0:00:51.078 *********
2025-08-29 17:44:36.668983 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669002 | orchestrator |
2025-08-29 17:44:36.669021 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 17:44:36.669043 | orchestrator | Friday 29 August 2025 17:44:31 +0000 (0:00:00.389) 0:00:51.467 *********
2025-08-29 17:44:36.669064 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669084 | orchestrator |
2025-08-29 17:44:36.669105 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 17:44:36.669125 | orchestrator | Friday 29 August 2025 17:44:31 +0000 (0:00:00.164) 0:00:51.632 *********
2025-08-29 17:44:36.669147 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669169 | orchestrator |
2025-08-29 17:44:36.669189 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 17:44:36.669201 | orchestrator | Friday 29 August 2025 17:44:32 +0000 (0:00:00.193) 0:00:51.826 *********
2025-08-29 17:44:36.669211 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669222 | orchestrator |
2025-08-29 17:44:36.669233 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 17:44:36.669243 | orchestrator | Friday 29 August 2025 17:44:32 +0000 (0:00:00.177) 0:00:52.003 *********
2025-08-29 17:44:36.669254 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669264 | orchestrator |
2025-08-29 17:44:36.669275 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 17:44:36.669285 | orchestrator | Friday 29 August 2025 17:44:32 +0000 (0:00:00.158) 0:00:52.162 *********
2025-08-29 17:44:36.669296 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669306 | orchestrator |
2025-08-29 17:44:36.669316 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 17:44:36.669327 | orchestrator | Friday 29 August 2025 17:44:32 +0000 (0:00:00.151) 0:00:52.313 *********
2025-08-29 17:44:36.669337 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669348 | orchestrator |
2025-08-29 17:44:36.669358 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 17:44:36.669369 | orchestrator | Friday 29 August 2025 17:44:32 +0000 (0:00:00.161) 0:00:52.474 *********
2025-08-29 17:44:36.669381 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669393 | orchestrator |
2025-08-29 17:44:36.669405 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 17:44:36.669417 | orchestrator | Friday 29 August 2025 17:44:32 +0000 (0:00:00.201) 0:00:52.675 *********
2025-08-29 17:44:36.669429 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669441 | orchestrator |
2025-08-29 17:44:36.669480 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 17:44:36.669501 | orchestrator | Friday 29 August 2025 17:44:33 +0000 (0:00:00.162) 0:00:52.838 *********
2025-08-29 17:44:36.669537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.669556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.669569 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669581 | orchestrator |
2025-08-29 17:44:36.669593 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 17:44:36.669605 | orchestrator | Friday 29 August 2025 17:44:33 +0000 (0:00:00.188) 0:00:53.026 *********
2025-08-29 17:44:36.669617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.669629 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.669651 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669663 | orchestrator |
2025-08-29 17:44:36.669675 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 17:44:36.669688 | orchestrator | Friday 29 August 2025 17:44:33 +0000 (0:00:00.196) 0:00:53.223 *********
2025-08-29 17:44:36.669700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.669712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.669724 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669734 | orchestrator |
2025-08-29 17:44:36.669752 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 17:44:36.669772 | orchestrator | Friday 29 August 2025 17:44:33 +0000 (0:00:00.187) 0:00:53.410 *********
2025-08-29 17:44:36.669791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.669810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.669829 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669849 | orchestrator |
2025-08-29 17:44:36.669869 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 17:44:36.669913 | orchestrator | Friday 29 August 2025 17:44:34 +0000 (0:00:00.413) 0:00:53.824 *********
2025-08-29 17:44:36.669926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.669937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.669948 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.669967 | orchestrator |
2025-08-29 17:44:36.669987 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 17:44:36.670006 | orchestrator | Friday 29 August 2025 17:44:34 +0000 (0:00:00.161) 0:00:53.986 *********
2025-08-29 17:44:36.670095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.670117 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.670136 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.670151 | orchestrator |
2025-08-29 17:44:36.670163 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 17:44:36.670174 | orchestrator | Friday 29 August 2025 17:44:34 +0000 (0:00:00.179) 0:00:54.166 *********
2025-08-29 17:44:36.670184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:44:36.670195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:44:36.670206 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:44:36.670217 | orchestrator |
2025-08-29 17:44:36.670237 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 17:44:36.670256 | orchestrator | Friday 29 August 2025 17:44:34 +0000 (0:00:00.174) 0:00:54.340 *********
2025-08-29 17:44:36.670276 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})  2025-08-29 17:44:36.670295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})  2025-08-29 17:44:36.670329 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:44:36.670349 | orchestrator | 2025-08-29 17:44:36.670366 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 17:44:36.670421 | orchestrator | Friday 29 August 2025 17:44:34 +0000 (0:00:00.152) 0:00:54.492 ********* 2025-08-29 17:44:36.670433 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:44:36.670444 | orchestrator | 2025-08-29 17:44:36.670479 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 17:44:36.670491 | orchestrator | Friday 29 August 2025 17:44:35 +0000 (0:00:00.599) 0:00:55.092 ********* 2025-08-29 17:44:36.670501 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:44:36.670512 | orchestrator | 2025-08-29 17:44:36.670522 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 17:44:36.670533 | orchestrator | Friday 29 August 2025 17:44:35 +0000 (0:00:00.528) 0:00:55.621 ********* 2025-08-29 17:44:36.670546 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:44:36.670565 | orchestrator | 2025-08-29 17:44:36.670584 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 17:44:36.670604 | orchestrator | Friday 29 August 2025 17:44:36 +0000 (0:00:00.145) 0:00:55.767 ********* 2025-08-29 17:44:36.670623 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'vg_name': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'}) 2025-08-29 17:44:36.670642 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'vg_name': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'}) 2025-08-29 17:44:36.670661 | orchestrator | 2025-08-29 17:44:36.670682 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 17:44:36.670702 | orchestrator | Friday 29 August 2025 17:44:36 +0000 (0:00:00.226) 0:00:55.994 ********* 2025-08-29 17:44:36.670720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})  2025-08-29 17:44:36.670738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})  2025-08-29 17:44:36.670749 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:44:36.670760 | orchestrator | 2025-08-29 17:44:36.670772 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 17:44:36.670791 | orchestrator | Friday 29 August 2025 17:44:36 +0000 (0:00:00.174) 0:00:56.168 ********* 2025-08-29 17:44:36.670811 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})  2025-08-29 17:44:36.670831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})  2025-08-29 17:44:36.670865 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:44:43.881567 | orchestrator | 2025-08-29 17:44:43.881663 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 17:44:43.881681 | orchestrator | Friday 29 August 2025 17:44:36 +0000 (0:00:00.186) 0:00:56.355 ********* 2025-08-29 17:44:43.881694 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})  2025-08-29 17:44:43.881707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})  2025-08-29 17:44:43.881717 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:44:43.881742 | orchestrator | 2025-08-29 17:44:43.881754 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-08-29 17:44:43.881764 | orchestrator | Friday 29 August 2025 17:44:36 +0000 (0:00:00.174) 0:00:56.529 ********* 2025-08-29 17:44:43.881796 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 17:44:43.881808 | orchestrator |  "lvm_report": { 2025-08-29 17:44:43.881819 | orchestrator |  "lv": [ 2025-08-29 17:44:43.881831 | orchestrator |  { 2025-08-29 17:44:43.881842 | orchestrator |  "lv_name": "osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129", 2025-08-29 17:44:43.881853 | orchestrator |  "vg_name": "ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129" 2025-08-29 17:44:43.881864 | orchestrator |  }, 2025-08-29 17:44:43.881874 | orchestrator |  { 2025-08-29 17:44:43.881885 | orchestrator |  "lv_name": "osd-block-90167df7-514b-5586-921e-4d7a2964fdd2", 2025-08-29 17:44:43.881896 | orchestrator |  "vg_name": "ceph-90167df7-514b-5586-921e-4d7a2964fdd2" 2025-08-29 17:44:43.881906 | orchestrator |  } 2025-08-29 17:44:43.881917 | orchestrator |  ], 2025-08-29 17:44:43.881928 | orchestrator |  "pv": [ 2025-08-29 17:44:43.881938 | orchestrator |  { 2025-08-29 17:44:43.881949 | orchestrator |  "pv_name": "/dev/sdb", 2025-08-29 17:44:43.881960 | orchestrator |  "vg_name": "ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129" 2025-08-29 17:44:43.881970 | orchestrator |  }, 2025-08-29 17:44:43.881981 | orchestrator |  { 2025-08-29 17:44:43.881992 | orchestrator |  "pv_name": "/dev/sdc", 2025-08-29 17:44:43.882002 | orchestrator |  "vg_name": 
"ceph-90167df7-514b-5586-921e-4d7a2964fdd2" 2025-08-29 17:44:43.882013 | orchestrator |  } 2025-08-29 17:44:43.882075 | orchestrator |  ] 2025-08-29 17:44:43.882087 | orchestrator |  } 2025-08-29 17:44:43.882098 | orchestrator | } 2025-08-29 17:44:43.882117 | orchestrator | 2025-08-29 17:44:43.882136 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 17:44:43.882153 | orchestrator | 2025-08-29 17:44:43.882172 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 17:44:43.882190 | orchestrator | Friday 29 August 2025 17:44:37 +0000 (0:00:00.566) 0:00:57.096 ********* 2025-08-29 17:44:43.882209 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 17:44:43.882227 | orchestrator | 2025-08-29 17:44:43.882262 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 17:44:43.882275 | orchestrator | Friday 29 August 2025 17:44:37 +0000 (0:00:00.273) 0:00:57.369 ********* 2025-08-29 17:44:43.882287 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:44:43.882299 | orchestrator | 2025-08-29 17:44:43.882312 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882324 | orchestrator | Friday 29 August 2025 17:44:37 +0000 (0:00:00.236) 0:00:57.606 ********* 2025-08-29 17:44:43.882336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 17:44:43.882348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 17:44:43.882360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 17:44:43.882371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 17:44:43.882383 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 17:44:43.882395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 17:44:43.882407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 17:44:43.882420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 17:44:43.882432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 17:44:43.882443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 17:44:43.882476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 17:44:43.882498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 17:44:43.882509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 17:44:43.882519 | orchestrator | 2025-08-29 17:44:43.882530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882540 | orchestrator | Friday 29 August 2025 17:44:38 +0000 (0:00:00.522) 0:00:58.128 ********* 2025-08-29 17:44:43.882551 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882561 | orchestrator | 2025-08-29 17:44:43.882576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882587 | orchestrator | Friday 29 August 2025 17:44:38 +0000 (0:00:00.235) 0:00:58.364 ********* 2025-08-29 17:44:43.882597 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882608 | orchestrator | 2025-08-29 17:44:43.882619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882647 | orchestrator | 
Friday 29 August 2025 17:44:38 +0000 (0:00:00.236) 0:00:58.601 ********* 2025-08-29 17:44:43.882659 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882669 | orchestrator | 2025-08-29 17:44:43.882680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882690 | orchestrator | Friday 29 August 2025 17:44:39 +0000 (0:00:00.247) 0:00:58.848 ********* 2025-08-29 17:44:43.882701 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882711 | orchestrator | 2025-08-29 17:44:43.882722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882732 | orchestrator | Friday 29 August 2025 17:44:39 +0000 (0:00:00.224) 0:00:59.073 ********* 2025-08-29 17:44:43.882743 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882753 | orchestrator | 2025-08-29 17:44:43.882764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882774 | orchestrator | Friday 29 August 2025 17:44:39 +0000 (0:00:00.212) 0:00:59.286 ********* 2025-08-29 17:44:43.882785 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882795 | orchestrator | 2025-08-29 17:44:43.882806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882816 | orchestrator | Friday 29 August 2025 17:44:40 +0000 (0:00:00.760) 0:01:00.046 ********* 2025-08-29 17:44:43.882827 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882837 | orchestrator | 2025-08-29 17:44:43.882848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882858 | orchestrator | Friday 29 August 2025 17:44:40 +0000 (0:00:00.226) 0:01:00.272 ********* 2025-08-29 17:44:43.882869 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:43.882879 | orchestrator | 2025-08-29 17:44:43.882890 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882900 | orchestrator | Friday 29 August 2025 17:44:40 +0000 (0:00:00.259) 0:01:00.532 ********* 2025-08-29 17:44:43.882911 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e) 2025-08-29 17:44:43.882922 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e) 2025-08-29 17:44:43.882933 | orchestrator | 2025-08-29 17:44:43.882944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.882954 | orchestrator | Friday 29 August 2025 17:44:41 +0000 (0:00:00.466) 0:01:00.998 ********* 2025-08-29 17:44:43.882964 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c) 2025-08-29 17:44:43.882975 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c) 2025-08-29 17:44:43.882986 | orchestrator | 2025-08-29 17:44:43.882996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.883007 | orchestrator | Friday 29 August 2025 17:44:41 +0000 (0:00:00.522) 0:01:01.520 ********* 2025-08-29 17:44:43.883023 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527) 2025-08-29 17:44:43.883040 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527) 2025-08-29 17:44:43.883050 | orchestrator | 2025-08-29 17:44:43.883061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.883071 | orchestrator | Friday 29 August 2025 17:44:42 +0000 (0:00:00.539) 0:01:02.059 ********* 2025-08-29 17:44:43.883082 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0) 2025-08-29 17:44:43.883092 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0) 2025-08-29 17:44:43.883103 | orchestrator | 2025-08-29 17:44:43.883113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 17:44:43.883124 | orchestrator | Friday 29 August 2025 17:44:42 +0000 (0:00:00.480) 0:01:02.540 ********* 2025-08-29 17:44:43.883134 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 17:44:43.883145 | orchestrator | 2025-08-29 17:44:43.883155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:43.883166 | orchestrator | Friday 29 August 2025 17:44:43 +0000 (0:00:00.448) 0:01:02.989 ********* 2025-08-29 17:44:43.883176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 17:44:43.883187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 17:44:43.883197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 17:44:43.883208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 17:44:43.883218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 17:44:43.883229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 17:44:43.883241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 17:44:43.883259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 17:44:43.883277 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 17:44:43.883295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 17:44:43.883314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 17:44:43.883341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 17:44:53.687199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 17:44:53.687771 | orchestrator | 2025-08-29 17:44:53.687794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.687804 | orchestrator | Friday 29 August 2025 17:44:43 +0000 (0:00:00.573) 0:01:03.562 ********* 2025-08-29 17:44:53.687813 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.687822 | orchestrator | 2025-08-29 17:44:53.687831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.687840 | orchestrator | Friday 29 August 2025 17:44:44 +0000 (0:00:00.218) 0:01:03.781 ********* 2025-08-29 17:44:53.687849 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.687857 | orchestrator | 2025-08-29 17:44:53.687865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.687873 | orchestrator | Friday 29 August 2025 17:44:44 +0000 (0:00:00.228) 0:01:04.009 ********* 2025-08-29 17:44:53.687881 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.687888 | orchestrator | 2025-08-29 17:44:53.687897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.687905 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:00:00.697) 0:01:04.707 ********* 2025-08-29 17:44:53.687927 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 17:44:53.687936 | orchestrator | 2025-08-29 17:44:53.687944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.687952 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:00:00.235) 0:01:04.942 ********* 2025-08-29 17:44:53.687960 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.687968 | orchestrator | 2025-08-29 17:44:53.687976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.687984 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:00:00.278) 0:01:05.220 ********* 2025-08-29 17:44:53.687992 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688000 | orchestrator | 2025-08-29 17:44:53.688008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688016 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:00:00.232) 0:01:05.453 ********* 2025-08-29 17:44:53.688024 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688032 | orchestrator | 2025-08-29 17:44:53.688040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688048 | orchestrator | Friday 29 August 2025 17:44:45 +0000 (0:00:00.234) 0:01:05.688 ********* 2025-08-29 17:44:53.688056 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688064 | orchestrator | 2025-08-29 17:44:53.688072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688080 | orchestrator | Friday 29 August 2025 17:44:46 +0000 (0:00:00.219) 0:01:05.907 ********* 2025-08-29 17:44:53.688087 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 17:44:53.688095 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 17:44:53.688102 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
17:44:53.688110 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 17:44:53.688117 | orchestrator | 2025-08-29 17:44:53.688124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688131 | orchestrator | Friday 29 August 2025 17:44:46 +0000 (0:00:00.762) 0:01:06.669 ********* 2025-08-29 17:44:53.688139 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688146 | orchestrator | 2025-08-29 17:44:53.688153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688160 | orchestrator | Friday 29 August 2025 17:44:47 +0000 (0:00:00.268) 0:01:06.938 ********* 2025-08-29 17:44:53.688167 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688175 | orchestrator | 2025-08-29 17:44:53.688182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688189 | orchestrator | Friday 29 August 2025 17:44:47 +0000 (0:00:00.222) 0:01:07.161 ********* 2025-08-29 17:44:53.688197 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688204 | orchestrator | 2025-08-29 17:44:53.688211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 17:44:53.688218 | orchestrator | Friday 29 August 2025 17:44:47 +0000 (0:00:00.254) 0:01:07.415 ********* 2025-08-29 17:44:53.688225 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688232 | orchestrator | 2025-08-29 17:44:53.688239 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 17:44:53.688247 | orchestrator | Friday 29 August 2025 17:44:47 +0000 (0:00:00.199) 0:01:07.615 ********* 2025-08-29 17:44:53.688254 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688261 | orchestrator | 2025-08-29 17:44:53.688268 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-08-29 17:44:53.688275 | orchestrator | Friday 29 August 2025 17:44:48 +0000 (0:00:00.396) 0:01:08.012 ********* 2025-08-29 17:44:53.688282 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1b4aa328-f83b-56f5-ada4-b8257b659e12'}}) 2025-08-29 17:44:53.688289 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '756a9a3b-59dc-526e-9851-f6b5408065e4'}}) 2025-08-29 17:44:53.688301 | orchestrator | 2025-08-29 17:44:53.688309 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 17:44:53.688316 | orchestrator | Friday 29 August 2025 17:44:48 +0000 (0:00:00.225) 0:01:08.237 ********* 2025-08-29 17:44:53.688324 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'}) 2025-08-29 17:44:53.688356 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'}) 2025-08-29 17:44:53.688364 | orchestrator | 2025-08-29 17:44:53.688371 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 17:44:53.688392 | orchestrator | Friday 29 August 2025 17:44:50 +0000 (0:00:01.931) 0:01:10.169 ********* 2025-08-29 17:44:53.688400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})  2025-08-29 17:44:53.688408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})  2025-08-29 17:44:53.688416 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688423 | orchestrator | 2025-08-29 17:44:53.688430 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-08-29 17:44:53.688437 | orchestrator | Friday 29 August 2025 17:44:50 +0000 (0:00:00.158) 0:01:10.327 ********* 2025-08-29 17:44:53.688444 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'}) 2025-08-29 17:44:53.688476 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'}) 2025-08-29 17:44:53.688485 | orchestrator | 2025-08-29 17:44:53.688493 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 17:44:53.688500 | orchestrator | Friday 29 August 2025 17:44:51 +0000 (0:00:01.356) 0:01:11.684 ********* 2025-08-29 17:44:53.688507 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})  2025-08-29 17:44:53.688514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})  2025-08-29 17:44:53.688522 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688529 | orchestrator | 2025-08-29 17:44:53.688536 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 17:44:53.688543 | orchestrator | Friday 29 August 2025 17:44:52 +0000 (0:00:00.176) 0:01:11.861 ********* 2025-08-29 17:44:53.688550 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:44:53.688557 | orchestrator | 2025-08-29 17:44:53.688564 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 17:44:53.688571 | orchestrator | Friday 29 August 2025 17:44:52 +0000 (0:00:00.153) 0:01:12.014 ********* 2025-08-29 17:44:53.688579 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:44:53.688589 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:44:53.688596 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:44:53.688603 | orchestrator |
2025-08-29 17:44:53.688611 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 17:44:53.688618 | orchestrator | Friday 29 August 2025 17:44:52 +0000 (0:00:00.154) 0:01:12.169 *********
2025-08-29 17:44:53.688625 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:44:53.688632 | orchestrator |
2025-08-29 17:44:53.688639 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 17:44:53.688651 | orchestrator | Friday 29 August 2025 17:44:52 +0000 (0:00:00.175) 0:01:12.344 *********
2025-08-29 17:44:53.688659 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:44:53.688666 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:44:53.688673 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:44:53.688680 | orchestrator |
2025-08-29 17:44:53.688687 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 17:44:53.688695 | orchestrator | Friday 29 August 2025 17:44:52 +0000 (0:00:00.168) 0:01:12.513 *********
2025-08-29 17:44:53.688702 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:44:53.688709 | orchestrator |
2025-08-29 17:44:53.688716 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 17:44:53.688723 | orchestrator | Friday 29 August 2025 17:44:52 +0000 (0:00:00.144) 0:01:12.658 *********
2025-08-29 17:44:53.688730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:44:53.688738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:44:53.688745 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:44:53.688752 | orchestrator |
2025-08-29 17:44:53.688759 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 17:44:53.688766 | orchestrator | Friday 29 August 2025 17:44:53 +0000 (0:00:00.148) 0:01:12.824 *********
2025-08-29 17:44:53.688773 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:44:53.688780 | orchestrator |
2025-08-29 17:44:53.688788 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 17:44:53.688795 | orchestrator | Friday 29 August 2025 17:44:53 +0000 (0:00:00.148) 0:01:12.973 *********
2025-08-29 17:44:53.688807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:00.147813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:00.147897 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.147906 | orchestrator |
2025-08-29 17:45:00.147915 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 17:45:00.147924 | orchestrator | Friday 29 August 2025 17:44:53 +0000 (0:00:00.401) 0:01:13.375 *********
2025-08-29 17:45:00.147932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:00.147940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:00.147948 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.147956 | orchestrator |
2025-08-29 17:45:00.147963 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 17:45:00.147971 | orchestrator | Friday 29 August 2025 17:44:53 +0000 (0:00:00.177) 0:01:13.553 *********
2025-08-29 17:45:00.147979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:00.147987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:00.147993 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.147999 | orchestrator |
2025-08-29 17:45:00.148022 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 17:45:00.148029 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.168) 0:01:13.721 *********
2025-08-29 17:45:00.148036 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148044 | orchestrator |
2025-08-29 17:45:00.148050 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 17:45:00.148058 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.153) 0:01:13.875 *********
2025-08-29 17:45:00.148065 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148081 | orchestrator |
2025-08-29 17:45:00.148089 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 17:45:00.148096 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.164) 0:01:14.040 *********
2025-08-29 17:45:00.148103 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148110 | orchestrator |
2025-08-29 17:45:00.148117 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 17:45:00.148137 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.145) 0:01:14.185 *********
2025-08-29 17:45:00.148144 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:45:00.148152 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 17:45:00.148159 | orchestrator | }
2025-08-29 17:45:00.148166 | orchestrator |
2025-08-29 17:45:00.148173 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 17:45:00.148180 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.144) 0:01:14.330 *********
2025-08-29 17:45:00.148186 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:45:00.148193 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 17:45:00.148199 | orchestrator | }
2025-08-29 17:45:00.148206 | orchestrator |
2025-08-29 17:45:00.148212 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 17:45:00.148218 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.131) 0:01:14.462 *********
2025-08-29 17:45:00.148225 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:45:00.148232 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 17:45:00.148239 | orchestrator | }
2025-08-29 17:45:00.148245 | orchestrator |
2025-08-29 17:45:00.148251 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 17:45:00.148258 | orchestrator | Friday 29 August 2025 17:44:54 +0000 (0:00:00.160) 0:01:14.623 *********
2025-08-29 17:45:00.148264 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:00.148271 | orchestrator |
2025-08-29 17:45:00.148277 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 17:45:00.148283 | orchestrator | Friday 29 August 2025 17:44:55 +0000 (0:00:00.540) 0:01:15.164 *********
2025-08-29 17:45:00.148290 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:00.148296 | orchestrator |
2025-08-29 17:45:00.148302 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 17:45:00.148309 | orchestrator | Friday 29 August 2025 17:44:56 +0000 (0:00:00.537) 0:01:15.701 *********
2025-08-29 17:45:00.148315 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:00.148322 | orchestrator |
2025-08-29 17:45:00.148328 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 17:45:00.148335 | orchestrator | Friday 29 August 2025 17:44:56 +0000 (0:00:00.528) 0:01:16.230 *********
2025-08-29 17:45:00.148342 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:00.148348 | orchestrator |
2025-08-29 17:45:00.148355 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 17:45:00.148361 | orchestrator | Friday 29 August 2025 17:44:56 +0000 (0:00:00.385) 0:01:16.615 *********
2025-08-29 17:45:00.148367 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148374 | orchestrator |
2025-08-29 17:45:00.148381 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 17:45:00.148389 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.121) 0:01:16.736 *********
2025-08-29 17:45:00.148395 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148409 | orchestrator |
2025-08-29 17:45:00.148416 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 17:45:00.148423 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.140) 0:01:16.877 *********
2025-08-29 17:45:00.148430 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:45:00.148437 | orchestrator |     "vgs_report": {
2025-08-29 17:45:00.148444 | orchestrator |         "vg": []
2025-08-29 17:45:00.148494 | orchestrator |     }
2025-08-29 17:45:00.148503 | orchestrator | }
2025-08-29 17:45:00.148511 | orchestrator |
2025-08-29 17:45:00.148518 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 17:45:00.148525 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.150) 0:01:17.028 *********
2025-08-29 17:45:00.148532 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148539 | orchestrator |
2025-08-29 17:45:00.148546 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 17:45:00.148553 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.154) 0:01:17.173 *********
2025-08-29 17:45:00.148560 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148567 | orchestrator |
2025-08-29 17:45:00.148574 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 17:45:00.148581 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.136) 0:01:17.327 *********
2025-08-29 17:45:00.148588 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148595 | orchestrator |
2025-08-29 17:45:00.148602 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 17:45:00.148608 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.139) 0:01:17.464 *********
2025-08-29 17:45:00.148615 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148622 | orchestrator |
2025-08-29 17:45:00.148629 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 17:45:00.148636 | orchestrator | Friday 29 August 2025 17:44:57 +0000 (0:00:00.137) 0:01:17.604 *********
2025-08-29 17:45:00.148643 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148650 | orchestrator |
2025-08-29 17:45:00.148657 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 17:45:00.148664 | orchestrator | Friday 29 August 2025 17:44:58 +0000 (0:00:00.137) 0:01:17.742 *********
2025-08-29 17:45:00.148671 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148678 | orchestrator |
2025-08-29 17:45:00.148685 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 17:45:00.148692 | orchestrator | Friday 29 August 2025 17:44:58 +0000 (0:00:00.139) 0:01:17.881 *********
2025-08-29 17:45:00.148699 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148705 | orchestrator |
2025-08-29 17:45:00.148712 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 17:45:00.148718 | orchestrator | Friday 29 August 2025 17:44:58 +0000 (0:00:00.143) 0:01:18.025 *********
2025-08-29 17:45:00.148724 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148730 | orchestrator |
2025-08-29 17:45:00.148736 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 17:45:00.148743 | orchestrator | Friday 29 August 2025 17:44:58 +0000 (0:00:00.147) 0:01:18.173 *********
2025-08-29 17:45:00.148749 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148754 | orchestrator |
2025-08-29 17:45:00.148760 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 17:45:00.148767 | orchestrator | Friday 29 August 2025 17:44:58 +0000 (0:00:00.364) 0:01:18.537 *********
2025-08-29 17:45:00.148778 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148785 | orchestrator |
2025-08-29 17:45:00.148791 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 17:45:00.148798 | orchestrator | Friday 29 August 2025 17:44:58 +0000 (0:00:00.152) 0:01:18.690 *********
2025-08-29 17:45:00.148804 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148812 | orchestrator |
2025-08-29 17:45:00.148818 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 17:45:00.148832 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:00.166) 0:01:18.857 *********
2025-08-29 17:45:00.148839 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148846 | orchestrator |
2025-08-29 17:45:00.148854 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 17:45:00.148861 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:00.150) 0:01:19.008 *********
2025-08-29 17:45:00.148868 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148875 | orchestrator |
2025-08-29 17:45:00.148882 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 17:45:00.148889 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:00.143) 0:01:19.151 *********
2025-08-29 17:45:00.148896 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148903 | orchestrator |
2025-08-29 17:45:00.148910 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 17:45:00.148917 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:00.139) 0:01:19.291 *********
2025-08-29 17:45:00.148924 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:00.148932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:00.148939 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148946 | orchestrator |
2025-08-29 17:45:00.148953 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 17:45:00.148960 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:00.193) 0:01:19.484 *********
2025-08-29 17:45:00.148967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:00.148974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:00.148981 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:00.148988 | orchestrator |
2025-08-29 17:45:00.148994 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 17:45:00.149000 | orchestrator | Friday 29 August 2025 17:44:59 +0000 (0:00:00.177) 0:01:19.662 *********
2025-08-29 17:45:00.149013 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.580894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.580989 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581001 | orchestrator |
2025-08-29 17:45:03.581010 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 17:45:03.581019 | orchestrator | Friday 29 August 2025 17:45:00 +0000 (0:00:00.171) 0:01:19.834 *********
2025-08-29 17:45:03.581028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581036 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581044 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581052 | orchestrator |
2025-08-29 17:45:03.581060 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 17:45:03.581068 | orchestrator | Friday 29 August 2025 17:45:00 +0000 (0:00:00.175) 0:01:20.010 *********
2025-08-29 17:45:03.581076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581121 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581129 | orchestrator |
2025-08-29 17:45:03.581137 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 17:45:03.581145 | orchestrator | Friday 29 August 2025 17:45:00 +0000 (0:00:00.183) 0:01:20.193 *********
2025-08-29 17:45:03.581153 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581168 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581176 | orchestrator |
2025-08-29 17:45:03.581183 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 17:45:03.581192 | orchestrator | Friday 29 August 2025 17:45:00 +0000 (0:00:00.170) 0:01:20.364 *********
2025-08-29 17:45:03.581199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581215 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581223 | orchestrator |
2025-08-29 17:45:03.581231 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 17:45:03.581239 | orchestrator | Friday 29 August 2025 17:45:01 +0000 (0:00:00.441) 0:01:20.805 *********
2025-08-29 17:45:03.581247 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581263 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581270 | orchestrator |
2025-08-29 17:45:03.581278 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 17:45:03.581286 | orchestrator | Friday 29 August 2025 17:45:01 +0000 (0:00:00.186) 0:01:20.991 *********
2025-08-29 17:45:03.581294 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:03.581302 | orchestrator |
2025-08-29 17:45:03.581310 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 17:45:03.581318 | orchestrator | Friday 29 August 2025 17:45:01 +0000 (0:00:00.564) 0:01:21.556 *********
2025-08-29 17:45:03.581326 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:03.581333 | orchestrator |
2025-08-29 17:45:03.581341 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 17:45:03.581349 | orchestrator | Friday 29 August 2025 17:45:02 +0000 (0:00:00.604) 0:01:22.161 *********
2025-08-29 17:45:03.581357 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:03.581364 | orchestrator |
2025-08-29 17:45:03.581372 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 17:45:03.581380 | orchestrator | Friday 29 August 2025 17:45:02 +0000 (0:00:00.169) 0:01:22.331 *********
2025-08-29 17:45:03.581388 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'vg_name': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581396 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'vg_name': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581404 | orchestrator |
2025-08-29 17:45:03.581412 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 17:45:03.581425 | orchestrator | Friday 29 August 2025 17:45:02 +0000 (0:00:00.186) 0:01:22.517 *********
2025-08-29 17:45:03.581446 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581496 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581505 | orchestrator |
2025-08-29 17:45:03.581514 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 17:45:03.581523 | orchestrator | Friday 29 August 2025 17:45:02 +0000 (0:00:00.163) 0:01:22.681 *********
2025-08-29 17:45:03.581532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581541 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581550 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581560 | orchestrator |
2025-08-29 17:45:03.581568 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 17:45:03.581577 | orchestrator | Friday 29 August 2025 17:45:03 +0000 (0:00:00.184) 0:01:22.866 *********
2025-08-29 17:45:03.581586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:45:03.581610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:45:03.581619 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:03.581628 | orchestrator |
2025-08-29 17:45:03.581637 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 17:45:03.581646 | orchestrator | Friday 29 August 2025 17:45:03 +0000 (0:00:00.188) 0:01:23.054 *********
2025-08-29 17:45:03.581655 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 17:45:03.581664 | orchestrator |     "lvm_report": {
2025-08-29 17:45:03.581673 | orchestrator |         "lv": [
2025-08-29 17:45:03.581682 | orchestrator |             {
2025-08-29 17:45:03.581691 | orchestrator |                 "lv_name": "osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12",
2025-08-29 17:45:03.581700 | orchestrator |                 "vg_name": "ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12"
2025-08-29 17:45:03.581709 | orchestrator |             },
2025-08-29 17:45:03.581722 | orchestrator |             {
2025-08-29 17:45:03.581731 | orchestrator |                 "lv_name": "osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4",
2025-08-29 17:45:03.581740 | orchestrator |                 "vg_name": "ceph-756a9a3b-59dc-526e-9851-f6b5408065e4"
2025-08-29 17:45:03.581749 | orchestrator |             }
2025-08-29 17:45:03.581757 | orchestrator |         ],
2025-08-29 17:45:03.581766 | orchestrator |         "pv": [
2025-08-29 17:45:03.581774 | orchestrator |             {
2025-08-29 17:45:03.581783 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 17:45:03.581792 | orchestrator |                 "vg_name": "ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12"
2025-08-29 17:45:03.581800 | orchestrator |             },
2025-08-29 17:45:03.581807 | orchestrator |             {
2025-08-29 17:45:03.581815 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 17:45:03.581823 | orchestrator |                 "vg_name": "ceph-756a9a3b-59dc-526e-9851-f6b5408065e4"
2025-08-29 17:45:03.581830 | orchestrator |             }
2025-08-29 17:45:03.581838 | orchestrator |         ]
2025-08-29 17:45:03.581846 | orchestrator |     }
2025-08-29 17:45:03.581853 | orchestrator | }
2025-08-29 17:45:03.581861 | orchestrator |
2025-08-29 17:45:03.581869 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:45:03.581877 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0  failed=0  skipped=62  rescued=0  ignored=0
2025-08-29 17:45:03.581892 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0  failed=0  skipped=62  rescued=0  ignored=0
2025-08-29 17:45:03.581899 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0  failed=0  skipped=62  rescued=0  ignored=0
2025-08-29 17:45:03.581907 | orchestrator |
2025-08-29 17:45:03.581915 | orchestrator |
2025-08-29 17:45:03.581923 | orchestrator |
2025-08-29 17:45:03.581930 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:45:03.581938 | orchestrator | Friday 29 August 2025 17:45:03 +0000 (0:00:00.191) 0:01:23.245 *********
2025-08-29 17:45:03.581946 | orchestrator | ===============================================================================
2025-08-29 17:45:03.581954 | orchestrator | Create block VGs -------------------------------------------------------- 6.26s
2025-08-29 17:45:03.581961 | orchestrator | Create block LVs -------------------------------------------------------- 4.49s
2025-08-29 17:45:03.581969 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.10s
2025-08-29 17:45:03.581977 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.74s
2025-08-29 17:45:03.581985 | orchestrator | Add known partitions to the list of available block devices ------------- 1.73s
2025-08-29 17:45:03.581992 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.68s
2025-08-29 17:45:03.582000 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.67s
2025-08-29 17:45:03.582008 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s
2025-08-29 17:45:03.582066 | orchestrator | Add known links to the list of available block devices ------------------ 1.53s
2025-08-29 17:45:04.080623 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s
2025-08-29 17:45:04.080748 | orchestrator | Print LVM report data --------------------------------------------------- 1.14s
2025-08-29 17:45:04.080764 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2025-08-29 17:45:04.080775 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.92s
2025-08-29 17:45:04.080786 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.86s
2025-08-29 17:45:04.080796 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.86s
2025-08-29 17:45:04.080807 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.83s
2025-08-29 17:45:04.080817 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.80s
2025-08-29 17:45:04.080828 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2025-08-29 17:45:04.080839 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.78s
2025-08-29 17:45:04.080849 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.77s
2025-08-29 17:45:16.823988 | orchestrator | 2025-08-29 17:45:16 | INFO  | Task e27d678b-a70c-4b58-8a30-5ff774d4b856 (facts) was prepared for execution.
2025-08-29 17:45:16.824094 | orchestrator | 2025-08-29 17:45:16 | INFO  | It takes a moment until task e27d678b-a70c-4b58-8a30-5ff774d4b856 (facts) has been started and output is visible here.
2025-08-29 17:45:30.521046 | orchestrator |
2025-08-29 17:45:30.521160 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-08-29 17:45:30.521174 | orchestrator |
2025-08-29 17:45:30.521185 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 17:45:30.521195 | orchestrator | Friday 29 August 2025 17:45:21 +0000 (0:00:00.353) 0:00:00.353 *********
2025-08-29 17:45:30.521205 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:45:30.521217 | orchestrator | ok: [testbed-manager]
2025-08-29 17:45:30.521229 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:45:30.521264 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:45:30.521273 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:45:30.521283 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:45:30.521292 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:30.521302 | orchestrator |
2025-08-29 17:45:30.521312 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 17:45:30.521323 | orchestrator | Friday 29 August 2025 17:45:22 +0000 (0:00:01.394) 0:00:01.747 *********
2025-08-29 17:45:30.521333 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:45:30.521359 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:45:30.521369 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:45:30.521379 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:45:30.521389 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:45:30.521398 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:45:30.521408 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:30.521418 | orchestrator |
2025-08-29 17:45:30.521428 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 17:45:30.521438 | orchestrator |
2025-08-29 17:45:30.521448 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 17:45:30.521458 | orchestrator | Friday 29 August 2025 17:45:24 +0000 (0:00:01.489) 0:00:03.237 *********
2025-08-29 17:45:30.521527 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:45:30.521537 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:45:30.521546 | orchestrator | ok: [testbed-manager]
2025-08-29 17:45:30.521555 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:45:30.521564 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:45:30.521573 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:45:30.521582 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:45:30.521593 | orchestrator |
2025-08-29 17:45:30.521603 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-08-29 17:45:30.521614 | orchestrator |
2025-08-29 17:45:30.521625 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-08-29 17:45:30.521639 | orchestrator | Friday 29 August 2025 17:45:29 +0000 (0:00:05.050) 0:00:08.287 *********
2025-08-29 17:45:30.521649 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:45:30.521659 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:45:30.521669 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:45:30.521678 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:45:30.521687 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:45:30.521696 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:45:30.521706 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:45:30.521716 | orchestrator |
2025-08-29 17:45:30.521726 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:45:30.521737 | orchestrator | testbed-manager : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521750 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521760 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521769 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521779 | orchestrator | testbed-node-3 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521788 | orchestrator | testbed-node-4 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521798 | orchestrator | testbed-node-5 : ok=2  changed=0  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2025-08-29 17:45:30.521807 | orchestrator |
2025-08-29 17:45:30.521816 | orchestrator |
2025-08-29 17:45:30.521838 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:45:30.521848 | orchestrator | Friday 29 August 2025 17:45:30 +0000 (0:00:00.549) 0:00:08.836 *********
2025-08-29 17:45:30.521857 | orchestrator | ===============================================================================
2025-08-29 17:45:30.521866 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.05s
2025-08-29 17:45:30.521875 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.49s
2025-08-29 17:45:30.521883 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.39s
2025-08-29 17:45:30.521893 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2025-08-29 17:45:42.948059 | orchestrator | 2025-08-29 17:45:42 | INFO  | Task 5ab195b6-cbb6-4026-9113-659e17a108d7 (frr) was prepared for execution.
2025-08-29 17:45:42.948203 | orchestrator | 2025-08-29 17:45:42 | INFO  | It takes a moment until task 5ab195b6-cbb6-4026-9113-659e17a108d7 (frr) has been started and output is visible here.
2025-08-29 17:46:11.206105 | orchestrator | 2025-08-29 17:46:11.206212 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-08-29 17:46:11.206225 | orchestrator | 2025-08-29 17:46:11.206231 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-08-29 17:46:11.206236 | orchestrator | Friday 29 August 2025 17:45:47 +0000 (0:00:00.258) 0:00:00.258 ********* 2025-08-29 17:46:11.206240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 17:46:11.206246 | orchestrator | 2025-08-29 17:46:11.206250 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-08-29 17:46:11.206254 | orchestrator | Friday 29 August 2025 17:45:47 +0000 (0:00:00.293) 0:00:00.551 ********* 2025-08-29 17:46:11.206258 | orchestrator | changed: [testbed-manager] 2025-08-29 17:46:11.206263 | orchestrator | 2025-08-29 17:46:11.206267 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-08-29 17:46:11.206271 | orchestrator | Friday 29 August 2025 17:45:48 +0000 (0:00:01.303) 0:00:01.855 ********* 2025-08-29 17:46:11.206274 | orchestrator | changed: [testbed-manager] 2025-08-29 17:46:11.206278 | orchestrator | 2025-08-29 17:46:11.206282 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-08-29 17:46:11.206302 | orchestrator | Friday 29 August 2025 17:45:59 +0000 (0:00:10.930) 0:00:12.785 ********* 2025-08-29 17:46:11.206309 | orchestrator | ok: [testbed-manager] 2025-08-29 17:46:11.206316 | orchestrator | 2025-08-29 17:46:11.206323 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-08-29 17:46:11.206329 | orchestrator | Friday 29 August 2025 17:46:01 +0000 (0:00:01.438) 0:00:14.223 ********* 2025-08-29 
17:46:11.206335 | orchestrator | changed: [testbed-manager] 2025-08-29 17:46:11.206341 | orchestrator | 2025-08-29 17:46:11.206348 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-08-29 17:46:11.206354 | orchestrator | Friday 29 August 2025 17:46:02 +0000 (0:00:01.002) 0:00:15.226 ********* 2025-08-29 17:46:11.206361 | orchestrator | ok: [testbed-manager] 2025-08-29 17:46:11.206367 | orchestrator | 2025-08-29 17:46:11.206374 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-08-29 17:46:11.206381 | orchestrator | Friday 29 August 2025 17:46:03 +0000 (0:00:01.259) 0:00:16.485 ********* 2025-08-29 17:46:11.206388 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:46:11.206394 | orchestrator | 2025-08-29 17:46:11.206400 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-08-29 17:46:11.206407 | orchestrator | Friday 29 August 2025 17:46:04 +0000 (0:00:00.849) 0:00:17.334 ********* 2025-08-29 17:46:11.206413 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:46:11.206419 | orchestrator | 2025-08-29 17:46:11.206425 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-08-29 17:46:11.206431 | orchestrator | Friday 29 August 2025 17:46:04 +0000 (0:00:00.167) 0:00:17.502 ********* 2025-08-29 17:46:11.206455 | orchestrator | changed: [testbed-manager] 2025-08-29 17:46:11.206462 | orchestrator | 2025-08-29 17:46:11.206515 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-08-29 17:46:11.206522 | orchestrator | Friday 29 August 2025 17:46:05 +0000 (0:00:01.037) 0:00:18.540 ********* 2025-08-29 17:46:11.206528 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-08-29 17:46:11.206534 | orchestrator | changed: [testbed-manager] => 
(item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-08-29 17:46:11.206541 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-08-29 17:46:11.206547 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-08-29 17:46:11.206553 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-08-29 17:46:11.206560 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-08-29 17:46:11.206566 | orchestrator | 2025-08-29 17:46:11.206572 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-08-29 17:46:11.206579 | orchestrator | Friday 29 August 2025 17:46:07 +0000 (0:00:02.318) 0:00:20.858 ********* 2025-08-29 17:46:11.206585 | orchestrator | ok: [testbed-manager] 2025-08-29 17:46:11.206592 | orchestrator | 2025-08-29 17:46:11.206597 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-08-29 17:46:11.206604 | orchestrator | Friday 29 August 2025 17:46:09 +0000 (0:00:01.464) 0:00:22.322 ********* 2025-08-29 17:46:11.206611 | orchestrator | changed: [testbed-manager] 2025-08-29 17:46:11.206617 | orchestrator | 2025-08-29 17:46:11.206623 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:46:11.206631 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 17:46:11.206637 | orchestrator | 2025-08-29 17:46:11.206643 | orchestrator | 2025-08-29 17:46:11.206649 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:46:11.206654 | orchestrator | Friday 29 August 2025 17:46:10 +0000 (0:00:01.488) 0:00:23.811 ********* 2025-08-29 
17:46:11.206660 | orchestrator | =============================================================================== 2025-08-29 17:46:11.206667 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.93s 2025-08-29 17:46:11.206674 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.32s 2025-08-29 17:46:11.206680 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.49s 2025-08-29 17:46:11.206686 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.46s 2025-08-29 17:46:11.206708 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.44s 2025-08-29 17:46:11.206716 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.30s 2025-08-29 17:46:11.206722 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.26s 2025-08-29 17:46:11.206730 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.04s 2025-08-29 17:46:11.206736 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.00s 2025-08-29 17:46:11.206743 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.85s 2025-08-29 17:46:11.206749 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.29s 2025-08-29 17:46:11.206756 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.17s 2025-08-29 17:46:11.553765 | orchestrator | 2025-08-29 17:46:11.556658 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 17:46:11 UTC 2025 2025-08-29 17:46:11.556717 | orchestrator | 2025-08-29 17:46:13.664562 | orchestrator | 2025-08-29 17:46:13 | INFO  | Collection nutshell is prepared for execution 2025-08-29 17:46:13.664695 | orchestrator | 2025-08-29 
17:46:13 | INFO  | D [0] - dotfiles 2025-08-29 17:46:23.742871 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [0] - homer 2025-08-29 17:46:23.743000 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [0] - netdata 2025-08-29 17:46:23.743011 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [0] - openstackclient 2025-08-29 17:46:23.743017 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [0] - phpmyadmin 2025-08-29 17:46:23.743031 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [0] - common 2025-08-29 17:46:23.747639 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [1] -- loadbalancer 2025-08-29 17:46:23.747673 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [2] --- opensearch 2025-08-29 17:46:23.748251 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [2] --- mariadb-ng 2025-08-29 17:46:23.748622 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [3] ---- horizon 2025-08-29 17:46:23.749901 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [3] ---- keystone 2025-08-29 17:46:23.749965 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [4] ----- neutron 2025-08-29 17:46:23.749980 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [5] ------ wait-for-nova 2025-08-29 17:46:23.750314 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [5] ------ octavia 2025-08-29 17:46:23.752264 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [4] ----- barbican 2025-08-29 17:46:23.752414 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [4] ----- designate 2025-08-29 17:46:23.752430 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [4] ----- ironic 2025-08-29 17:46:23.752442 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [4] ----- placement 2025-08-29 17:46:23.752462 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [4] ----- magnum 2025-08-29 17:46:23.753407 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [1] -- openvswitch 2025-08-29 17:46:23.753431 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [2] --- ovn 2025-08-29 17:46:23.753752 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [1] -- 
memcached 2025-08-29 17:46:23.753797 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [1] -- redis 2025-08-29 17:46:23.753968 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [1] -- rabbitmq-ng 2025-08-29 17:46:23.754428 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [0] - kubernetes 2025-08-29 17:46:23.757078 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [1] -- kubeconfig 2025-08-29 17:46:23.757091 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [1] -- copy-kubeconfig 2025-08-29 17:46:23.757373 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [0] - ceph 2025-08-29 17:46:23.759958 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [1] -- ceph-pools 2025-08-29 17:46:23.759987 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [2] --- copy-ceph-keys 2025-08-29 17:46:23.760386 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [3] ---- cephclient 2025-08-29 17:46:23.760404 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-08-29 17:46:23.760415 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [4] ----- wait-for-keystone 2025-08-29 17:46:23.760519 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [5] ------ kolla-ceph-rgw 2025-08-29 17:46:23.760610 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [5] ------ glance 2025-08-29 17:46:23.760621 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [5] ------ cinder 2025-08-29 17:46:23.760632 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [5] ------ nova 2025-08-29 17:46:23.761028 | orchestrator | 2025-08-29 17:46:23 | INFO  | A [4] ----- prometheus 2025-08-29 17:46:23.761325 | orchestrator | 2025-08-29 17:46:23 | INFO  | D [5] ------ grafana 2025-08-29 17:46:24.048694 | orchestrator | 2025-08-29 17:46:24 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-08-29 17:46:24.048797 | orchestrator | 2025-08-29 17:46:24 | INFO  | Tasks are running in the background 2025-08-29 17:46:27.238812 | orchestrator | 2025-08-29 17:46:27 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-08-29 17:46:29.397966 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task d63df790-b9ae-4ab1-8ea5-1ab3066721d4 is in state STARTED 2025-08-29 17:46:29.403830 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task ca0d6fb8-cd76-4260-b657-e30e5fe9d007 is in state STARTED 2025-08-29 17:46:29.403893 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task ae6b1783-1228-4152-ab95-596d0575a516 is in state STARTED 2025-08-29 17:46:29.403904 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:46:29.403915 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:46:29.403926 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:46:29.404967 | orchestrator | 2025-08-29 17:46:29 | INFO  | Task 0a58dd92-da2f-4704-bf43-c3f93aca79bd is in state STARTED 2025-08-29 17:46:29.405044 | orchestrator | 2025-08-29 17:46:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:47:04.258740 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task d63df790-b9ae-4ab1-8ea5-1ab3066721d4 is in state STARTED 2025-08-29 17:47:04.265583 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task
ca0d6fb8-cd76-4260-b657-e30e5fe9d007 is in state STARTED 2025-08-29 17:47:04.267857 | orchestrator | 2025-08-29 17:47:04.267881 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-08-29 17:47:04.267890 | orchestrator | 2025-08-29 17:47:04.267897 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-08-29 17:47:04.267904 | orchestrator | Friday 29 August 2025 17:46:42 +0000 (0:00:01.258) 0:00:01.258 ********* 2025-08-29 17:47:04.267911 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:47:04.267918 | orchestrator | changed: [testbed-manager] 2025-08-29 17:47:04.267925 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:47:04.267931 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:47:04.267937 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:47:04.267944 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:47:04.267950 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:47:04.267956 | orchestrator | 2025-08-29 17:47:04.267962 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-08-29 17:47:04.267969 | orchestrator | Friday 29 August 2025 17:46:49 +0000 (0:00:06.558) 0:00:07.816 ********* 2025-08-29 17:47:04.267975 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-08-29 17:47:04.267982 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-08-29 17:47:04.267989 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-08-29 17:47:04.267995 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-08-29 17:47:04.268001 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-08-29 17:47:04.268007 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-08-29 17:47:04.268013 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-08-29 17:47:04.268019 | orchestrator | 2025-08-29 17:47:04.268026 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-08-29 17:47:04.268032 | orchestrator | Friday 29 August 2025 17:46:52 +0000 (0:00:02.906) 0:00:10.722 ********* 2025-08-29 17:47:04.268042 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:51.332987', 'end': '2025-08-29 17:46:51.341971', 'delta': '0:00:00.008984', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268082 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:51.051642', 'end': '2025-08-29 17:46:51.058982', 'delta': '0:00:00.007340', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268090 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:51.004813', 'end': '2025-08-29 17:46:51.014431', 'delta': '0:00:00.009618', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268116 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:51.172635', 'end': '2025-08-29 17:46:51.178633', 'delta': '0:00:00.005998', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268124 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:51.932605', 'end': '2025-08-29 17:46:51.941498', 'delta': '0:00:00.008893', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268131 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:52.021678', 'end': '2025-08-29 17:46:52.030260', 'delta': '0:00:00.008582', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268148 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 17:46:51.607359', 'end': '2025-08-29 17:46:51.612740', 'delta': '0:00:00.005381', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-08-29 17:47:04.268155 | orchestrator | 2025-08-29 17:47:04.268162 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
****
2025-08-29 17:47:04.268168 | orchestrator | Friday 29 August 2025 17:46:55 +0000 (0:00:03.172) 0:00:13.895 *********
2025-08-29 17:47:04.268175 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 17:47:04.268181 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 17:47:04.268188 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 17:47:04.268194 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 17:47:04.268200 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 17:47:04.268207 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 17:47:04.268213 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 17:47:04.268219 | orchestrator |
2025-08-29 17:47:04.268226 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-08-29 17:47:04.268232 | orchestrator | Friday 29 August 2025 17:46:57 +0000 (0:00:01.665) 0:00:15.560 *********
2025-08-29 17:47:04.268239 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-08-29 17:47:04.268246 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 17:47:04.268252 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 17:47:04.268258 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 17:47:04.268264 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 17:47:04.268270 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 17:47:04.268276 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 17:47:04.268282 | orchestrator |
2025-08-29 17:47:04.268289 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:47:04.268299 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268307 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268313 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268320 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268326 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268332 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268342 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:47:04.268349 | orchestrator |
2025-08-29 17:47:04.268355 | orchestrator |
2025-08-29 17:47:04.268361 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:47:04.268367 | orchestrator | Friday 29 August 2025 17:47:01 +0000 (0:00:04.102) 0:00:19.663 *********
2025-08-29 17:47:04.268373 | orchestrator | ===============================================================================
2025-08-29 17:47:04.268379 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.56s
2025-08-29 17:47:04.268385 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.10s
2025-08-29 17:47:04.268392 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.17s
2025-08-29 17:47:04.268398 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.91s
2025-08-29 17:47:04.268404 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.
---- 1.67s
2025-08-29 17:47:04.268410 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task ae6b1783-1228-4152-ab95-596d0575a516 is in state SUCCESS
2025-08-29 17:47:04.273600 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:47:04.273622 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:47:04.350888 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task 39e96ddb-43e9-47b9-bf63-87ea7ea51949 is in state STARTED
2025-08-29 17:47:04.359309 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:47:04.360647 | orchestrator | 2025-08-29 17:47:04 | INFO  | Task 0a58dd92-da2f-4704-bf43-c3f93aca79bd is in state STARTED
2025-08-29 17:47:04.360853 | orchestrator | 2025-08-29 17:47:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:47:07.459640 | orchestrator | 2025-08-29 17:47:07 | INFO  | Task d63df790-b9ae-4ab1-8ea5-1ab3066721d4 is in state STARTED
2025-08-29 17:47:07.459748 | orchestrator | 2025-08-29 17:47:07 | INFO  | Task ca0d6fb8-cd76-4260-b657-e30e5fe9d007 is in state STARTED
2025-08-29 17:47:26.365190 | orchestrator | 2025-08-29 17:47:26 | INFO  | Task 0a58dd92-da2f-4704-bf43-c3f93aca79bd is in state SUCCESS
2025-08-29 17:47:51.651693 | orchestrator | 2025-08-29 17:47:51 | INFO  | Task ca0d6fb8-cd76-4260-b657-e30e5fe9d007 is in state SUCCESS
2025-08-29 17:48:13.285063 | orchestrator |
2025-08-29 17:48:13.285178 | orchestrator |
2025-08-29 17:48:13.285201 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-08-29 17:48:13.285219 | orchestrator |
2025-08-29 17:48:13.285236 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-08-29 17:48:13.285253 | orchestrator | Friday 29 August 2025 17:46:46 +0000 (0:00:01.039) 0:00:01.039 *********
2025-08-29 17:48:13.285270 | orchestrator | ok: [testbed-manager] => {
2025-08-29 17:48:13.285299 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-08-29 17:48:13.285319 | orchestrator | } 2025-08-29 17:48:13.285336 | orchestrator | 2025-08-29 17:48:13.285353 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-08-29 17:48:13.285395 | orchestrator | Friday 29 August 2025 17:46:46 +0000 (0:00:00.450) 0:00:01.489 ********* 2025-08-29 17:48:13.285412 | orchestrator | ok: [testbed-manager] 2025-08-29 17:48:13.285429 | orchestrator | 2025-08-29 17:48:13.285446 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-08-29 17:48:13.285462 | orchestrator | Friday 29 August 2025 17:46:48 +0000 (0:00:02.413) 0:00:03.903 ********* 2025-08-29 17:48:13.285532 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-08-29 17:48:13.285550 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-08-29 17:48:13.285568 | orchestrator | 2025-08-29 17:48:13.285586 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-08-29 17:48:13.285604 | orchestrator | Friday 29 August 2025 17:46:51 +0000 (0:00:02.230) 0:00:06.134 ********* 2025-08-29 17:48:13.285621 | orchestrator | changed: [testbed-manager] 2025-08-29 17:48:13.285638 | orchestrator | 2025-08-29 17:48:13.285656 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-08-29 17:48:13.285673 | orchestrator | Friday 29 August 2025 17:46:54 +0000 (0:00:03.582) 0:00:09.716 ********* 2025-08-29 17:48:13.285691 | orchestrator | changed: [testbed-manager] 2025-08-29 17:48:13.285708 | orchestrator | 2025-08-29 17:48:13.285725 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-08-29 17:48:13.285742 | orchestrator | Friday 29 August 2025 17:46:57 +0000 (0:00:02.486) 0:00:12.203 ********* 2025-08-29 17:48:13.285759 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-08-29 17:48:13.285777 | orchestrator | ok: [testbed-manager] 2025-08-29 17:48:13.285795 | orchestrator | 2025-08-29 17:48:13.285812 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-08-29 17:48:13.285829 | orchestrator | Friday 29 August 2025 17:47:21 +0000 (0:00:24.187) 0:00:36.390 ********* 2025-08-29 17:48:13.285846 | orchestrator | changed: [testbed-manager] 2025-08-29 17:48:13.285863 | orchestrator | 2025-08-29 17:48:13.285880 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:48:13.285899 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:48:13.285916 | orchestrator | 2025-08-29 17:48:13.285933 | orchestrator | 2025-08-29 17:48:13.285949 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:48:13.285966 | orchestrator | Friday 29 August 2025 17:47:24 +0000 (0:00:03.119) 0:00:39.510 ********* 2025-08-29 17:48:13.285983 | orchestrator | =============================================================================== 2025-08-29 17:48:13.285999 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.19s 2025-08-29 17:48:13.286100 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.58s 2025-08-29 17:48:13.286122 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.12s 2025-08-29 17:48:13.286140 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.49s 2025-08-29 17:48:13.286157 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.41s 2025-08-29 17:48:13.286176 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.23s 2025-08-29 17:48:13.286193 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.45s
2025-08-29 17:48:13.286211 | orchestrator |
2025-08-29 17:48:13.286229 | orchestrator |
2025-08-29 17:48:13.286247 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-08-29 17:48:13.286264 | orchestrator |
2025-08-29 17:48:13.286282 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-08-29 17:48:13.286300 | orchestrator | Friday 29 August 2025 17:46:44 +0000 (0:00:01.793) 0:00:01.793 *********
2025-08-29 17:48:13.286319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-08-29 17:48:13.286372 | orchestrator |
2025-08-29 17:48:13.286408 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-08-29 17:48:13.286423 | orchestrator | Friday 29 August 2025 17:46:45 +0000 (0:00:00.676) 0:00:02.470 *********
2025-08-29 17:48:13.286437 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-08-29 17:48:13.286451 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-08-29 17:48:13.286466 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-08-29 17:48:13.286507 | orchestrator |
2025-08-29 17:48:13.286522 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-08-29 17:48:13.286537 | orchestrator | Friday 29 August 2025 17:46:47 +0000 (0:00:02.097) 0:00:04.569 *********
2025-08-29 17:48:13.286551 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.286566 | orchestrator |
2025-08-29 17:48:13.286581 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-08-29 17:48:13.286598 | orchestrator | Friday 29 August 2025 17:46:51 +0000 (0:00:04.110) 0:00:08.679 *********
2025-08-29 17:48:13.286639 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-08-29 17:48:13.286658 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.286673 | orchestrator |
2025-08-29 17:48:13.286690 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-08-29 17:48:13.286707 | orchestrator | Friday 29 August 2025 17:47:32 +0000 (0:00:41.324) 0:00:50.004 *********
2025-08-29 17:48:13.286724 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.286744 | orchestrator |
2025-08-29 17:48:13.286762 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-08-29 17:48:13.286789 | orchestrator | Friday 29 August 2025 17:47:36 +0000 (0:00:03.878) 0:00:53.883 *********
2025-08-29 17:48:13.286805 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.286822 | orchestrator |
2025-08-29 17:48:13.286839 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-08-29 17:48:13.286854 | orchestrator | Friday 29 August 2025 17:47:37 +0000 (0:00:01.044) 0:00:54.928 *********
2025-08-29 17:48:13.286870 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.286885 | orchestrator |
2025-08-29 17:48:13.286901 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-08-29 17:48:13.286918 | orchestrator | Friday 29 August 2025 17:47:43 +0000 (0:00:05.232) 0:01:00.161 *********
2025-08-29 17:48:13.286934 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.286950 | orchestrator |
2025-08-29 17:48:13.286966 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-08-29 17:48:13.286982 | orchestrator | Friday 29 August 2025 17:47:44 +0000 (0:00:01.791) 0:01:01.952 *********
2025-08-29 17:48:13.286998 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.287014 | orchestrator |
2025-08-29 17:48:13.287025 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-08-29 17:48:13.287034 | orchestrator | Friday 29 August 2025 17:47:45 +0000 (0:00:01.061) 0:01:03.014 *********
2025-08-29 17:48:13.287044 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.287053 | orchestrator |
2025-08-29 17:48:13.287063 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:48:13.287072 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.287082 | orchestrator |
2025-08-29 17:48:13.287092 | orchestrator |
2025-08-29 17:48:13.287101 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:48:13.287110 | orchestrator | Friday 29 August 2025 17:47:46 +0000 (0:00:00.602) 0:01:03.616 *********
2025-08-29 17:48:13.287120 | orchestrator | ===============================================================================
2025-08-29 17:48:13.287130 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 41.32s
2025-08-29 17:48:13.287146 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 5.23s
2025-08-29 17:48:13.287177 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 4.11s
2025-08-29 17:48:13.287194 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.88s
2025-08-29 17:48:13.287209 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.10s
2025-08-29 17:48:13.287227 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.79s
2025-08-29 17:48:13.287243 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.06s
2025-08-29 17:48:13.287259 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.04s
2025-08-29 17:48:13.287270 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.68s
2025-08-29 17:48:13.287279 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.60s
2025-08-29 17:48:13.287289 | orchestrator |
2025-08-29 17:48:13.287298 | orchestrator |
2025-08-29 17:48:13.287308 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:48:13.287317 | orchestrator |
2025-08-29 17:48:13.287326 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:48:13.287336 | orchestrator | Friday 29 August 2025 17:46:39 +0000 (0:00:00.729) 0:00:00.729 *********
2025-08-29 17:48:13.287345 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-08-29 17:48:13.287355 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-08-29 17:48:13.287364 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-08-29 17:48:13.287376 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-08-29 17:48:13.287393 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-08-29 17:48:13.287409 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-08-29 17:48:13.287425 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-08-29 17:48:13.287442 | orchestrator |
2025-08-29 17:48:13.287458 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-08-29 17:48:13.287533 | orchestrator |
2025-08-29 17:48:13.287552 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-08-29 17:48:13.287570 | orchestrator | Friday 29 August 2025 17:46:44 +0000 (0:00:05.453) 0:00:06.182 *********
2025-08-29 17:48:13.287603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:48:13.287633 | orchestrator |
2025-08-29 17:48:13.287650 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-08-29 17:48:13.287666 | orchestrator | Friday 29 August 2025 17:46:48 +0000 (0:00:03.130) 0:00:09.312 *********
2025-08-29 17:48:13.287682 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:48:13.287698 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:48:13.287714 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.287732 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:48:13.287747 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:48:13.287774 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:48:13.287789 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:48:13.287802 | orchestrator |
2025-08-29 17:48:13.287815 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-08-29 17:48:13.287829 | orchestrator | Friday 29 August 2025 17:46:51 +0000 (0:00:03.489) 0:00:12.802 *********
2025-08-29 17:48:13.287843 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.287851 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:48:13.287859 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:48:13.287866 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:48:13.287874 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:48:13.287882 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:48:13.287890 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:48:13.287897 | orchestrator |
2025-08-29 17:48:13.287905 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-08-29 17:48:13.287923 | orchestrator | Friday 29 August 2025 17:46:58 +0000 (0:00:06.901) 0:00:19.704 *********
2025-08-29 17:48:13.287931 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:48:13.287939 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.287947 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:48:13.287955 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:48:13.287963 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:48:13.287970 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:48:13.287978 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:48:13.287985 | orchestrator |
2025-08-29 17:48:13.287993 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-08-29 17:48:13.288001 | orchestrator | Friday 29 August 2025 17:47:01 +0000 (0:00:03.564) 0:00:23.268 *********
2025-08-29 17:48:13.288008 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:48:13.288016 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:48:13.288024 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:48:13.288031 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:48:13.288039 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:48:13.288047 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:48:13.288054 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.288062 | orchestrator |
2025-08-29 17:48:13.288069 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-08-29 17:48:13.288077 | orchestrator | Friday 29 August 2025 17:47:17 +0000 (0:00:15.599) 0:00:38.868 *********
2025-08-29 17:48:13.288085 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:48:13.288095 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:48:13.288109 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:48:13.288122 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:48:13.288135 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:48:13.288149 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:48:13.288161 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.288175 | orchestrator |
2025-08-29 17:48:13.288190 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-08-29 17:48:13.288205 | orchestrator | Friday 29 August 2025 17:47:45 +0000 (0:00:27.772) 0:01:06.640 *********
2025-08-29 17:48:13.288262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:48:13.288279 | orchestrator |
2025-08-29 17:48:13.288288 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-08-29 17:48:13.288296 | orchestrator | Friday 29 August 2025 17:47:47 +0000 (0:00:01.892) 0:01:08.533 *********
2025-08-29 17:48:13.288303 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-08-29 17:48:13.288332 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-08-29 17:48:13.288341 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-08-29 17:48:13.288348 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-08-29 17:48:13.288356 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-08-29 17:48:13.288364 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-08-29 17:48:13.288371 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-08-29 17:48:13.288379 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-08-29 17:48:13.288387 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-08-29 17:48:13.288395 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-08-29 17:48:13.288402 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-08-29 17:48:13.288410 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-08-29 17:48:13.288418 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-08-29 17:48:13.288425 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-08-29 17:48:13.288433 | orchestrator |
2025-08-29 17:48:13.288448 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-08-29 17:48:13.288456 | orchestrator | Friday 29 August 2025 17:47:55 +0000 (0:00:07.861) 0:01:16.394 *********
2025-08-29 17:48:13.288464 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.288531 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:48:13.288540 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:48:13.288550 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:48:13.288565 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:48:13.288579 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:48:13.288593 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:48:13.288607 | orchestrator |
2025-08-29 17:48:13.288621 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-08-29 17:48:13.288636 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:01.332) 0:01:17.727 *********
2025-08-29 17:48:13.288653 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.288669 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:48:13.288684 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:48:13.288699 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:48:13.288713 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:48:13.288728 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:48:13.288743 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:48:13.288753 | orchestrator |
2025-08-29 17:48:13.288761 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-08-29 17:48:13.288780 | orchestrator | Friday 29 August 2025 17:47:58 +0000 (0:00:01.766) 0:01:19.493 *********
2025-08-29 17:48:13.288788 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:48:13.288796 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.288803 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:48:13.288811 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:48:13.288819 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:48:13.288826 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:48:13.288834 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:48:13.288841 | orchestrator |
2025-08-29 17:48:13.288849 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-08-29 17:48:13.288857 | orchestrator | Friday 29 August 2025 17:47:59 +0000 (0:00:01.710) 0:01:21.203 *********
2025-08-29 17:48:13.288875 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:48:13.288883 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:48:13.288890 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:48:13.288898 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:48:13.288906 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:48:13.288913 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:48:13.288921 | orchestrator | ok: [testbed-manager]
2025-08-29 17:48:13.288929 | orchestrator |
2025-08-29 17:48:13.288943 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-08-29 17:48:13.288956 | orchestrator | Friday 29 August 2025 17:48:02 +0000 (0:00:02.863) 0:01:24.067 *********
2025-08-29 17:48:13.288969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-08-29 17:48:13.288985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:48:13.288998 | orchestrator |
2025-08-29 17:48:13.289012 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-08-29 17:48:13.289026 | orchestrator | Friday 29 August 2025 17:48:04 +0000 (0:00:01.779) 0:01:25.846 *********
2025-08-29 17:48:13.289041 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.289056 | orchestrator |
2025-08-29 17:48:13.289071 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-08-29 17:48:13.289085 | orchestrator | Friday 29 August 2025 17:48:07 +0000 (0:00:03.052) 0:01:28.899 *********
2025-08-29 17:48:13.289097 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:48:13.289112 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:48:13.289137 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:48:13.289148 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:48:13.289156 | orchestrator | changed: [testbed-manager]
2025-08-29 17:48:13.289164 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:48:13.289172 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:48:13.289179 | orchestrator |
2025-08-29 17:48:13.289187 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:48:13.289195 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289203 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289211 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289219 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289227 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289235 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289243 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:48:13.289250 | orchestrator |
2025-08-29 17:48:13.289258 | orchestrator |
2025-08-29 17:48:13.289266 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:48:13.289274 | orchestrator | Friday 29 August 2025 17:48:11 +0000 (0:00:03.758) 0:01:32.661 *********
2025-08-29 17:48:13.289282 | orchestrator | ===============================================================================
2025-08-29 17:48:13.289290 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 27.77s
2025-08-29 17:48:13.289297 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.60s
2025-08-29 17:48:13.289305 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.86s
2025-08-29 17:48:13.289313 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 6.90s
2025-08-29 17:48:13.289320 | orchestrator | Group hosts based on enabled services ----------------------------------- 5.45s
2025-08-29 17:48:13.289328 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.76s
2025-08-29 17:48:13.289336 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.56s
2025-08-29 17:48:13.289343 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.49s
2025-08-29 17:48:13.289351 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.13s
2025-08-29 17:48:13.289358 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.05s
2025-08-29 17:48:13.289366 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.86s
2025-08-29 17:48:13.289381 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.89s
2025-08-29 17:48:13.289389 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.78s
2025-08-29 17:48:13.289397 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.77s
2025-08-29 17:48:13.289404 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.71s
2025-08-29 17:48:13.289412 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.33s
2025-08-29 17:48:13.289424 | orchestrator | 2025-08-29 17:48:13 | INFO  | Task d63df790-b9ae-4ab1-8ea5-1ab3066721d4 is in state SUCCESS
2025-08-29 17:48:13.289433 | orchestrator | 2025-08-29 17:48:13 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:48:13.289445 | orchestrator | 2025-08-29 17:48:13 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:48:13.289453 | orchestrator | 2025-08-29 17:48:13 | INFO  | Task 39e96ddb-43e9-47b9-bf63-87ea7ea51949 is in state STARTED
2025-08-29 17:48:13.289461 | orchestrator | 2025-08-29 17:48:13 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:48:13.289489 | orchestrator | 2025-08-29 17:48:13 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:48:16.349803 | orchestrator | 2025-08-29 17:48:16 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:48:16.350328 | orchestrator | 2025-08-29 17:48:16 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:48:16.351339 | orchestrator | 2025-08-29 17:48:16 | INFO  | Task 39e96ddb-43e9-47b9-bf63-87ea7ea51949 is in state STARTED
2025-08-29 17:48:16.352802 | orchestrator |
2025-08-29 17:48:16 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:16.352834 | orchestrator | 2025-08-29 17:48:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:19.463785 | orchestrator | 2025-08-29 17:48:19 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:19.464022 | orchestrator | 2025-08-29 17:48:19 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:19.464306 | orchestrator | 2025-08-29 17:48:19 | INFO  | Task 39e96ddb-43e9-47b9-bf63-87ea7ea51949 is in state SUCCESS 2025-08-29 17:48:19.466498 | orchestrator | 2025-08-29 17:48:19 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:19.466533 | orchestrator | 2025-08-29 17:48:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:22.527752 | orchestrator | 2025-08-29 17:48:22 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:22.529142 | orchestrator | 2025-08-29 17:48:22 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:22.531100 | orchestrator | 2025-08-29 17:48:22 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:22.531168 | orchestrator | 2025-08-29 17:48:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:25.573230 | orchestrator | 2025-08-29 17:48:25 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:25.575852 | orchestrator | 2025-08-29 17:48:25 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:25.578873 | orchestrator | 2025-08-29 17:48:25 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:25.578920 | orchestrator | 2025-08-29 17:48:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:28.615935 | orchestrator | 2025-08-29 17:48:28 | INFO  | Task 
ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:28.618004 | orchestrator | 2025-08-29 17:48:28 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:28.620285 | orchestrator | 2025-08-29 17:48:28 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:28.620300 | orchestrator | 2025-08-29 17:48:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:31.667958 | orchestrator | 2025-08-29 17:48:31 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:31.668675 | orchestrator | 2025-08-29 17:48:31 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:31.670155 | orchestrator | 2025-08-29 17:48:31 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:31.670191 | orchestrator | 2025-08-29 17:48:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:34.732333 | orchestrator | 2025-08-29 17:48:34 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:34.736529 | orchestrator | 2025-08-29 17:48:34 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:34.739411 | orchestrator | 2025-08-29 17:48:34 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:34.739458 | orchestrator | 2025-08-29 17:48:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:37.779130 | orchestrator | 2025-08-29 17:48:37 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:37.780854 | orchestrator | 2025-08-29 17:48:37 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:37.781800 | orchestrator | 2025-08-29 17:48:37 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:37.782074 | orchestrator | 2025-08-29 17:48:37 | INFO  | Wait 1 second(s) until the next 
check 2025-08-29 17:48:40.835532 | orchestrator | 2025-08-29 17:48:40 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:40.838751 | orchestrator | 2025-08-29 17:48:40 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:40.838785 | orchestrator | 2025-08-29 17:48:40 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:40.838795 | orchestrator | 2025-08-29 17:48:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:43.880549 | orchestrator | 2025-08-29 17:48:43 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:43.881308 | orchestrator | 2025-08-29 17:48:43 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:43.881853 | orchestrator | 2025-08-29 17:48:43 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:43.882882 | orchestrator | 2025-08-29 17:48:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:46.930236 | orchestrator | 2025-08-29 17:48:46 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:46.930353 | orchestrator | 2025-08-29 17:48:46 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:46.931318 | orchestrator | 2025-08-29 17:48:46 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:46.931364 | orchestrator | 2025-08-29 17:48:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:49.991683 | orchestrator | 2025-08-29 17:48:49 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:49.994995 | orchestrator | 2025-08-29 17:48:49 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:49.999003 | orchestrator | 2025-08-29 17:48:49 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 
17:48:49.999560 | orchestrator | 2025-08-29 17:48:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:53.056240 | orchestrator | 2025-08-29 17:48:53 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:53.058865 | orchestrator | 2025-08-29 17:48:53 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:53.061133 | orchestrator | 2025-08-29 17:48:53 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:53.061205 | orchestrator | 2025-08-29 17:48:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:56.119258 | orchestrator | 2025-08-29 17:48:56 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:56.121208 | orchestrator | 2025-08-29 17:48:56 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:56.125608 | orchestrator | 2025-08-29 17:48:56 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:56.125644 | orchestrator | 2025-08-29 17:48:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:48:59.166851 | orchestrator | 2025-08-29 17:48:59 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:48:59.168621 | orchestrator | 2025-08-29 17:48:59 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:48:59.172727 | orchestrator | 2025-08-29 17:48:59 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:48:59.172787 | orchestrator | 2025-08-29 17:48:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:02.225739 | orchestrator | 2025-08-29 17:49:02 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:02.228283 | orchestrator | 2025-08-29 17:49:02 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:49:02.230308 | orchestrator | 2025-08-29 17:49:02 | 
INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:02.231333 | orchestrator | 2025-08-29 17:49:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:05.286222 | orchestrator | 2025-08-29 17:49:05 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:05.288301 | orchestrator | 2025-08-29 17:49:05 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:49:05.290359 | orchestrator | 2025-08-29 17:49:05 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:05.290392 | orchestrator | 2025-08-29 17:49:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:08.334582 | orchestrator | 2025-08-29 17:49:08 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:08.336811 | orchestrator | 2025-08-29 17:49:08 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:49:08.338455 | orchestrator | 2025-08-29 17:49:08 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:08.338530 | orchestrator | 2025-08-29 17:49:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:11.385086 | orchestrator | 2025-08-29 17:49:11 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:11.386167 | orchestrator | 2025-08-29 17:49:11 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED 2025-08-29 17:49:11.387507 | orchestrator | 2025-08-29 17:49:11 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:11.387565 | orchestrator | 2025-08-29 17:49:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:14.428506 | orchestrator | 2025-08-29 17:49:14 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:14.430012 | orchestrator | 2025-08-29 17:49:14 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in 
state STARTED
2025-08-29 17:49:14.431290 | orchestrator | 2025-08-29 17:49:14 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:14.431336 | orchestrator | 2025-08-29 17:49:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:17.468989 | orchestrator | 2025-08-29 17:49:17 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:17.469140 | orchestrator | 2025-08-29 17:49:17 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:17.470171 | orchestrator | 2025-08-29 17:49:17 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:17.470417 | orchestrator | 2025-08-29 17:49:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:20.515672 | orchestrator | 2025-08-29 17:49:20 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:20.519851 | orchestrator | 2025-08-29 17:49:20 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:20.520286 | orchestrator | 2025-08-29 17:49:20 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:20.520516 | orchestrator | 2025-08-29 17:49:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:23.562835 | orchestrator | 2025-08-29 17:49:23 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:23.563909 | orchestrator | 2025-08-29 17:49:23 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:23.565389 | orchestrator | 2025-08-29 17:49:23 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:23.565420 | orchestrator | 2025-08-29 17:49:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:26.617126 | orchestrator | 2025-08-29 17:49:26 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:26.619022 | orchestrator | 2025-08-29 17:49:26 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:26.622974 | orchestrator | 2025-08-29 17:49:26 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:26.623356 | orchestrator | 2025-08-29 17:49:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:29.672870 | orchestrator | 2025-08-29 17:49:29 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:29.675204 | orchestrator | 2025-08-29 17:49:29 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:29.677073 | orchestrator | 2025-08-29 17:49:29 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:29.677325 | orchestrator | 2025-08-29 17:49:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:32.729339 | orchestrator | 2025-08-29 17:49:32 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:32.729798 | orchestrator | 2025-08-29 17:49:32 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:32.731093 | orchestrator | 2025-08-29 17:49:32 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:32.731126 | orchestrator | 2025-08-29 17:49:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:35.825395 | orchestrator | 2025-08-29 17:49:35 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:35.825526 | orchestrator | 2025-08-29 17:49:35 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:35.825542 | orchestrator | 2025-08-29 17:49:35 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:35.825555 | orchestrator | 2025-08-29 17:49:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:38.849178 | orchestrator | 2025-08-29 17:49:38 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:38.854135 | orchestrator | 2025-08-29 17:49:38 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:38.855234 | orchestrator | 2025-08-29 17:49:38 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:38.856355 | orchestrator | 2025-08-29 17:49:38 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:41.902966 | orchestrator | 2025-08-29 17:49:41 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:41.904366 | orchestrator | 2025-08-29 17:49:41 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:41.908076 | orchestrator | 2025-08-29 17:49:41 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:41.908108 | orchestrator | 2025-08-29 17:49:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:44.952741 | orchestrator | 2025-08-29 17:49:44 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:44.962233 | orchestrator | 2025-08-29 17:49:44 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state STARTED
2025-08-29 17:49:44.962295 | orchestrator | 2025-08-29 17:49:44 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED
2025-08-29 17:49:44.962308 | orchestrator | 2025-08-29 17:49:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:49:48.269714 | orchestrator |
2025-08-29 17:49:48.269801 | orchestrator |
2025-08-29 17:49:48.269817 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-08-29 17:49:48.269829 | orchestrator |
2025-08-29 17:49:48.269840 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-08-29 17:49:48.269852 | orchestrator | Friday 29 August 2025 17:47:10 +0000 (0:00:00.393) 0:00:00.394 *********
2025-08-29 17:49:48.269863 | orchestrator | ok: [testbed-manager]
2025-08-29 17:49:48.269875 | orchestrator |
2025-08-29 17:49:48.269886 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-08-29 17:49:48.269896 | orchestrator | Friday 29 August 2025 17:47:11 +0000 (0:00:01.696) 0:00:02.090 *********
2025-08-29 17:49:48.269907 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-08-29 17:49:48.269918 | orchestrator |
2025-08-29 17:49:48.269929 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-08-29 17:49:48.269939 | orchestrator | Friday 29 August 2025 17:47:12 +0000 (0:00:00.700) 0:00:02.790 *********
2025-08-29 17:49:48.269950 | orchestrator | changed: [testbed-manager]
2025-08-29 17:49:48.269961 | orchestrator |
2025-08-29 17:49:48.269971 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-08-29 17:49:48.269982 | orchestrator | Friday 29 August 2025 17:47:14 +0000 (0:00:01.628) 0:00:04.419 *********
2025-08-29 17:49:48.269993 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
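The repeated `is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a client that polls the state of each submitted task once per second until every task reaches a terminal state. A minimal sketch of that polling pattern follows; the `get_state` callable, the state names, and `wait_for_tasks` itself are illustrative assumptions, not the actual OSISM client API:

```python
import time

# Terminal states after which a task is no longer polled (assumed names).
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=60.0):
    """Poll get_state(task_id) until every task reaches a terminal state.

    Returns a dict mapping task id -> final state, or raises TimeoutError
    if the deadline passes first. get_state is a caller-supplied callable
    standing in for the real task-status lookup.
    """
    deadline = time.monotonic() + timeout
    final = {}
    pending = list(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
                pending.remove(task_id)
        if not pending:
            break
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still pending: {pending}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return final
```

With three tasks in flight, each polling round prints one state line per pending task followed by a single wait line, which matches the shape of the log output above.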
2025-08-29 17:49:48.270004 | orchestrator | ok: [testbed-manager]
2025-08-29 17:49:48.270111 | orchestrator |
2025-08-29 17:49:48.270129 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-08-29 17:49:48.270140 | orchestrator | Friday 29 August 2025 17:48:12 +0000 (0:00:58.467) 0:01:02.887 *********
2025-08-29 17:49:48.270152 | orchestrator | changed: [testbed-manager]
2025-08-29 17:49:48.270162 | orchestrator |
2025-08-29 17:49:48.270173 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:49:48.270184 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:49:48.270195 | orchestrator |
2025-08-29 17:49:48.270206 | orchestrator |
2025-08-29 17:49:48.270217 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:49:48.270247 | orchestrator | Friday 29 August 2025 17:48:17 +0000 (0:00:04.437) 0:01:07.325 *********
2025-08-29 17:49:48.270258 | orchestrator | ===============================================================================
2025-08-29 17:49:48.270271 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.47s
2025-08-29 17:49:48.270290 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.44s
2025-08-29 17:49:48.270303 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.70s
2025-08-29 17:49:48.270315 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.63s
2025-08-29 17:49:48.270327 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.70s
2025-08-29 17:49:48.270339 | orchestrator |
2025-08-29 17:49:48.270351 | orchestrator |
2025-08-29 17:49:48.270363 | orchestrator | PLAY [Apply role common] *******************************************************
2025-08-29 17:49:48.270374 | orchestrator |
2025-08-29 17:49:48.270386 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 17:49:48.270398 | orchestrator | Friday 29 August 2025 17:46:29 +0000 (0:00:00.415) 0:00:00.415 *********
2025-08-29 17:49:48.270410 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:49:48.270423 | orchestrator |
2025-08-29 17:49:48.270435 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 17:49:48.270447 | orchestrator | Friday 29 August 2025 17:46:31 +0000 (0:00:01.778) 0:00:02.194 *********
2025-08-29 17:49:48.270483 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270496 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270508 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270520 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270532 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270544 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270558 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270570 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270582 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270594 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270606 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270618 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270630 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 17:49:48.270643 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270654 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270665 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270700 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 17:49:48.270711 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270722 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270733 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270744 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 17:49:48.270763 | orchestrator |
2025-08-29 17:49:48.270774 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 17:49:48.270785 | orchestrator | Friday 29 August 2025 17:46:37 +0000 (0:00:06.659) 0:00:08.854 *********
2025-08-29 17:49:48.270796 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:49:48.270807 | orchestrator |
2025-08-29 17:49:48.270818 |
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-08-29 17:49:48.270828 | orchestrator | Friday 29 August 2025 17:46:39 +0000 (0:00:01.754) 0:00:10.609 *********
2025-08-29 17:49:48.270843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.270865 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.270877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.270889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.270900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.270934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.270983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.270997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271013 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271135 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE':
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271213 | orchestrator |
2025-08-29 17:49:48.271225 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-08-29 17:49:48.271242 | orchestrator | Friday 29 August 2025 17:46:46 +0000 (0:00:06.627) 0:00:17.236 *********
2025-08-29 17:49:48.271254 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271266 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271278 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271348 | orchestrator | skipping: [testbed-manager]
2025-08-29 17:49:48.271366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271389 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:49:48.271400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271438 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:49:48.271449 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:49:48.271479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271526 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:49:48.271537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271587 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:49:48.271598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271632 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:49:48.271642 | orchestrator |
2025-08-29 17:49:48.271653 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-08-29 17:49:48.271664 | orchestrator | Friday 29 August 2025 17:46:49 +0000 (0:00:03.428) 0:00:20.665 *********
2025-08-29 17:49:48.271675 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271705 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 17:49:48.271717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.271732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged':
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271762 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:49:48.271773 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:48.271784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:49:48.271795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:49:48.271836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271859 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:48.271870 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:48.271881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:49:48.271898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:49:48.271945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.271968 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:49:48.271979 | orchestrator | 
skipping: [testbed-node-4] 2025-08-29 17:49:48.271991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 17:49:48.272006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.272024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.272035 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:49:48.272046 | orchestrator | 2025-08-29 17:49:48.272057 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-08-29 17:49:48.272068 | 
orchestrator | Friday 29 August 2025 17:46:54 +0000 (0:00:04.645) 0:00:25.310 ********* 2025-08-29 17:49:48.272079 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:49:48.272089 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:48.272100 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:48.272111 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:48.272121 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:49:48.272132 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:49:48.272142 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:49:48.272153 | orchestrator | 2025-08-29 17:49:48.272164 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-08-29 17:49:48.272174 | orchestrator | Friday 29 August 2025 17:46:56 +0000 (0:00:01.739) 0:00:27.050 ********* 2025-08-29 17:49:48.272193 | orchestrator | skipping: [testbed-manager] 2025-08-29 17:49:48.272214 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:49:48.272234 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:49:48.272254 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:49:48.272274 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:49:48.272295 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:49:48.272316 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:49:48.272337 | orchestrator | 2025-08-29 17:49:48.272358 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-08-29 17:49:48.272370 | orchestrator | Friday 29 August 2025 17:46:57 +0000 (0:00:01.766) 0:00:28.816 ********* 2025-08-29 17:49:48.272397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272443 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272535 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272598 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272621 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.272667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.272745 | orchestrator | 2025-08-29 17:49:48.272756 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-08-29 17:49:48.272767 | orchestrator | Friday 29 August 2025 17:47:08 +0000 (0:00:10.966) 0:00:39.784 ********* 2025-08-29 17:49:48.272778 | orchestrator | [WARNING]: Skipped 
2025-08-29 17:49:48.272789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-08-29 17:49:48.272800 | orchestrator | to this access issue: 2025-08-29 17:49:48.272810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-08-29 17:49:48.272821 | orchestrator | directory 2025-08-29 17:49:48.272832 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:49:48.272843 | orchestrator | 2025-08-29 17:49:48.272853 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-08-29 17:49:48.272864 | orchestrator | Friday 29 August 2025 17:47:11 +0000 (0:00:02.333) 0:00:42.118 ********* 2025-08-29 17:49:48.272874 | orchestrator | [WARNING]: Skipped 2025-08-29 17:49:48.272885 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-08-29 17:49:48.272896 | orchestrator | to this access issue: 2025-08-29 17:49:48.272907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-08-29 17:49:48.272917 | orchestrator | directory 2025-08-29 17:49:48.272928 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:49:48.272939 | orchestrator | 2025-08-29 17:49:48.272949 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-08-29 17:49:48.272960 | orchestrator | Friday 29 August 2025 17:47:12 +0000 (0:00:01.116) 0:00:43.235 ********* 2025-08-29 17:49:48.272970 | orchestrator | [WARNING]: Skipped 2025-08-29 17:49:48.272981 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-08-29 17:49:48.272992 | orchestrator | to this access issue: 2025-08-29 17:49:48.273003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-08-29 17:49:48.273019 | orchestrator | directory 2025-08-29 17:49:48.273030 | orchestrator | ok: 
[testbed-manager -> localhost] 2025-08-29 17:49:48.273041 | orchestrator | 2025-08-29 17:49:48.273057 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-08-29 17:49:48.273068 | orchestrator | Friday 29 August 2025 17:47:13 +0000 (0:00:01.389) 0:00:44.624 ********* 2025-08-29 17:49:48.273079 | orchestrator | [WARNING]: Skipped 2025-08-29 17:49:48.273090 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-08-29 17:49:48.273101 | orchestrator | to this access issue: 2025-08-29 17:49:48.273112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-08-29 17:49:48.273122 | orchestrator | directory 2025-08-29 17:49:48.273133 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 17:49:48.273144 | orchestrator | 2025-08-29 17:49:48.273154 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-08-29 17:49:48.273165 | orchestrator | Friday 29 August 2025 17:47:15 +0000 (0:00:01.722) 0:00:46.347 ********* 2025-08-29 17:49:48.273176 | orchestrator | changed: [testbed-manager] 2025-08-29 17:49:48.273187 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:49:48.273198 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:48.273208 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:48.273219 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:49:48.273229 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:48.273240 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:49:48.273251 | orchestrator | 2025-08-29 17:49:48.273262 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-08-29 17:49:48.273288 | orchestrator | Friday 29 August 2025 17:47:23 +0000 (0:00:07.876) 0:00:54.224 ********* 2025-08-29 17:49:48.273299 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273310 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273321 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273332 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273343 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273353 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273364 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-08-29 17:49:48.273374 | orchestrator | 2025-08-29 17:49:48.273389 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-08-29 17:49:48.273400 | orchestrator | Friday 29 August 2025 17:47:28 +0000 (0:00:05.376) 0:00:59.601 ********* 2025-08-29 17:49:48.273411 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:48.273421 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:48.273432 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:49:48.273443 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:49:48.273469 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:48.273481 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:49:48.273491 | orchestrator | changed: [testbed-manager] 2025-08-29 17:49:48.273502 | orchestrator | 2025-08-29 17:49:48.273512 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-08-29 17:49:48.273523 | orchestrator | Friday 29 August 2025 17:47:32 +0000 (0:00:04.010) 0:01:03.612 ********* 2025-08-29 
17:49:48.273534 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273568 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273610 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.273626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273637 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.273654 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273689 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.273700 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273723 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-08-29 17:49:48.273738 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273749 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273767 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.273778 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.273798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:49:48.273810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.273821 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.273832 | orchestrator | 2025-08-29 17:49:48.273843 | orchestrator | TASK [common : Copy rabbitmq-env.conf 
to kolla toolbox] ************************
2025-08-29 17:49:48.273854 | orchestrator | Friday 29 August 2025 17:47:36 +0000 (0:00:04.098) 0:01:07.710 *********
2025-08-29 17:49:48.273865 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273875 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273886 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273897 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273907 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273924 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273935 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 17:49:48.273945 | orchestrator |
2025-08-29 17:49:48.273956 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-08-29 17:49:48.273967 | orchestrator | Friday 29 August 2025 17:47:41 +0000 (0:00:04.634) 0:01:12.344 *********
2025-08-29 17:49:48.273978 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.273989 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.273999 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.274010 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.274063 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.274075 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.274086 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 17:49:48.274096 | orchestrator |
2025-08-29 17:49:48.274107 | orchestrator | TASK [common : Check common containers] ****************************************
2025-08-29 17:49:48.274118 | orchestrator | Friday 29 August 2025 17:47:45 +0000 (0:00:04.499) 0:01:16.843 *********
2025-08-29 17:49:48.274129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.274141 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 17:49:48.274276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.274356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.274442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.274481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274546 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.274616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 17:49:48.274628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274650 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:49:48.274739 | orchestrator | 2025-08-29 17:49:48.274752 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-08-29 17:49:48.274763 | orchestrator | Friday 29 August 2025 17:47:53 +0000 (0:00:07.236) 0:01:24.080 ********* 2025-08-29 17:49:48.274774 | orchestrator | changed: [testbed-manager] 2025-08-29 17:49:48.274786 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:48.274796 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:48.274807 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:48.274818 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:49:48.274828 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:49:48.274838 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:49:48.274849 | orchestrator | 2025-08-29 17:49:48.274860 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-08-29 17:49:48.274871 | orchestrator | Friday 29 August 2025 17:47:55 +0000 (0:00:01.932) 0:01:26.012 ********* 2025-08-29 17:49:48.274881 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:49:48.274892 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:49:48.274902 | orchestrator | changed: [testbed-manager] 2025-08-29 17:49:48.274913 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:49:48.274923 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:49:48.274934 | 
orchestrator | changed: [testbed-node-4]
2025-08-29 17:49:48.274945 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:49:48.274956 | orchestrator |
2025-08-29 17:49:48.274967 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.274978 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:01.335) 0:01:27.348 *********
2025-08-29 17:49:48.274988 | orchestrator |
2025-08-29 17:49:48.274999 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.275009 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:00.109) 0:01:27.457 *********
2025-08-29 17:49:48.275020 | orchestrator |
2025-08-29 17:49:48.275031 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.275042 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:00.082) 0:01:27.539 *********
2025-08-29 17:49:48.275052 | orchestrator |
2025-08-29 17:49:48.275063 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.275073 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:00.261) 0:01:27.801 *********
2025-08-29 17:49:48.275084 | orchestrator |
2025-08-29 17:49:48.275094 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.275105 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:00.085) 0:01:27.887 *********
2025-08-29 17:49:48.275115 | orchestrator |
2025-08-29 17:49:48.275126 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.275136 | orchestrator | Friday 29 August 2025 17:47:57 +0000 (0:00:00.068) 0:01:27.956 *********
2025-08-29 17:49:48.275147 | orchestrator |
2025-08-29 17:49:48.275158 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 17:49:48.275175 | orchestrator | Friday 29 August 2025 17:47:57 +0000 (0:00:00.116) 0:01:28.072 *********
2025-08-29 17:49:48.275186 | orchestrator |
2025-08-29 17:49:48.275197 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-08-29 17:49:48.275207 | orchestrator | Friday 29 August 2025 17:47:57 +0000 (0:00:00.113) 0:01:28.186 *********
2025-08-29 17:49:48.275224 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:48.275236 | orchestrator | changed: [testbed-manager]
2025-08-29 17:49:48.275246 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:49:48.275260 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:49:48.275278 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:48.275296 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:48.275315 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:49:48.275333 | orchestrator |
2025-08-29 17:49:48.275351 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-08-29 17:49:48.275371 | orchestrator | Friday 29 August 2025 17:48:40 +0000 (0:00:43.227) 0:02:11.413 *********
2025-08-29 17:49:48.275389 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:48.275409 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:49:48.275428 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:48.275447 | orchestrator | changed: [testbed-manager]
2025-08-29 17:49:48.275506 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:48.275524 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:49:48.275535 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:49:48.275546 | orchestrator |
2025-08-29 17:49:48.275556 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-08-29 17:49:48.275567 | orchestrator | Friday 29 August 2025 17:49:31 +0000 (0:00:51.366) 0:03:02.780 *********
2025-08-29 17:49:48.275578 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:49:48.275589 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:49:48.275600 | orchestrator | ok: [testbed-manager]
2025-08-29 17:49:48.275610 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:49:48.275621 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:49:48.275631 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:49:48.275642 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:49:48.275652 | orchestrator |
2025-08-29 17:49:48.275663 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-08-29 17:49:48.275674 | orchestrator | Friday 29 August 2025 17:49:34 +0000 (0:00:02.250) 0:03:05.030 *********
2025-08-29 17:49:48.275685 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:49:48.275695 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:49:48.275706 | orchestrator | changed: [testbed-manager]
2025-08-29 17:49:48.275716 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:49:48.275727 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:49:48.275737 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:49:48.275747 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:49:48.275758 | orchestrator |
2025-08-29 17:49:48.275769 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:49:48.275780 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275798 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275809 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275820 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275831 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275850 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275861 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 17:49:48.275872 | orchestrator |
2025-08-29 17:49:48.275882 | orchestrator |
2025-08-29 17:49:48.275893 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:49:48.275904 | orchestrator | Friday 29 August 2025 17:49:45 +0000 (0:00:11.012) 0:03:16.043 *********
2025-08-29 17:49:48.275915 | orchestrator | ===============================================================================
2025-08-29 17:49:48.275925 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 51.37s
2025-08-29 17:49:48.275936 | orchestrator | common : Restart fluentd container ------------------------------------- 43.23s
2025-08-29 17:49:48.275946 | orchestrator | common : Restart cron container ---------------------------------------- 11.01s
2025-08-29 17:49:48.275957 | orchestrator | common : Copying over config.json files for services ------------------- 10.97s
2025-08-29 17:49:48.275967 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 7.88s
2025-08-29 17:49:48.275978 | orchestrator | common : Check common containers ---------------------------------------- 7.24s
2025-08-29 17:49:48.275988 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.66s
2025-08-29 17:49:48.275999 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.63s
2025-08-29 17:49:48.276010 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.38s
2025-08-29 17:49:48.276020 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.67s
2025-08-29 17:49:48.276031 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.63s
2025-08-29 17:49:48.276042 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.50s
2025-08-29 17:49:48.276052 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.10s
2025-08-29 17:49:48.276063 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.01s
2025-08-29 17:49:48.276082 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.43s
2025-08-29 17:49:48.276093 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.33s
2025-08-29 17:49:48.276104 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.25s
2025-08-29 17:49:48.276114 | orchestrator | common : Creating log volume -------------------------------------------- 1.93s
2025-08-29 17:49:48.276125 | orchestrator | common : include_tasks -------------------------------------------------- 1.78s
2025-08-29 17:49:48.276135 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.77s
2025-08-29 17:49:48.276146 | orchestrator | 2025-08-29 17:49:48 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED
2025-08-29 17:49:48.276157 | orchestrator | 2025-08-29 17:49:48 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED
2025-08-29 17:49:48.276167 | orchestrator | 2025-08-29 17:49:48 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:49:48.276178 | orchestrator | 2025-08-29 17:49:48 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED
2025-08-29 17:49:48.276189 | orchestrator | 2025-08-29 17:49:48 | INFO  | Task 7fed7c01-e3fd-40e7-a679-0d72f0e9126f is in state SUCCESS
2025-08-29 17:49:48.276199 | orchestrator | 2025-08-29 17:49:48 | INFO  | 
Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:49:48.276210 | orchestrator | 2025-08-29 17:49:48 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:48.276220 | orchestrator | 2025-08-29 17:49:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:51.248933 | orchestrator | 2025-08-29 17:49:51 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:49:51.249018 | orchestrator | 2025-08-29 17:49:51 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:49:51.249032 | orchestrator | 2025-08-29 17:49:51 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:51.249060 | orchestrator | 2025-08-29 17:49:51 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:49:51.249072 | orchestrator | 2025-08-29 17:49:51 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:49:51.249082 | orchestrator | 2025-08-29 17:49:51 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:51.249093 | orchestrator | 2025-08-29 17:49:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:54.147282 | orchestrator | 2025-08-29 17:49:54 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:49:54.149341 | orchestrator | 2025-08-29 17:49:54 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:49:54.150441 | orchestrator | 2025-08-29 17:49:54 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:54.152989 | orchestrator | 2025-08-29 17:49:54 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:49:54.155141 | orchestrator | 2025-08-29 17:49:54 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:49:54.156029 | orchestrator | 2025-08-29 17:49:54 | INFO  | Task 
2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:54.156418 | orchestrator | 2025-08-29 17:49:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:49:57.216574 | orchestrator | 2025-08-29 17:49:57 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:49:57.216939 | orchestrator | 2025-08-29 17:49:57 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:49:57.217909 | orchestrator | 2025-08-29 17:49:57 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:49:57.218794 | orchestrator | 2025-08-29 17:49:57 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:49:57.219668 | orchestrator | 2025-08-29 17:49:57 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:49:57.224005 | orchestrator | 2025-08-29 17:49:57 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:49:57.224080 | orchestrator | 2025-08-29 17:49:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:00.261297 | orchestrator | 2025-08-29 17:50:00 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:00.262931 | orchestrator | 2025-08-29 17:50:00 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:50:00.263599 | orchestrator | 2025-08-29 17:50:00 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:00.264278 | orchestrator | 2025-08-29 17:50:00 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:00.265028 | orchestrator | 2025-08-29 17:50:00 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:00.265869 | orchestrator | 2025-08-29 17:50:00 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:00.265962 | orchestrator | 2025-08-29 17:50:00 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 17:50:03.353767 | orchestrator | 2025-08-29 17:50:03 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:03.353871 | orchestrator | 2025-08-29 17:50:03 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:50:03.353886 | orchestrator | 2025-08-29 17:50:03 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:03.353898 | orchestrator | 2025-08-29 17:50:03 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:03.353909 | orchestrator | 2025-08-29 17:50:03 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:03.353920 | orchestrator | 2025-08-29 17:50:03 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:03.353931 | orchestrator | 2025-08-29 17:50:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:06.373599 | orchestrator | 2025-08-29 17:50:06 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:06.375370 | orchestrator | 2025-08-29 17:50:06 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:50:06.377706 | orchestrator | 2025-08-29 17:50:06 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:06.378742 | orchestrator | 2025-08-29 17:50:06 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:06.381601 | orchestrator | 2025-08-29 17:50:06 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:06.383366 | orchestrator | 2025-08-29 17:50:06 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:06.383399 | orchestrator | 2025-08-29 17:50:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:09.432162 | orchestrator | 2025-08-29 17:50:09 | INFO  | Task 
b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:09.434855 | orchestrator | 2025-08-29 17:50:09 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state STARTED 2025-08-29 17:50:09.437072 | orchestrator | 2025-08-29 17:50:09 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:09.441884 | orchestrator | 2025-08-29 17:50:09 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:09.447079 | orchestrator | 2025-08-29 17:50:09 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:09.448039 | orchestrator | 2025-08-29 17:50:09 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:09.448072 | orchestrator | 2025-08-29 17:50:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:12.510003 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:12.510176 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task af2bd48b-0b49-476d-bcb2-b2a3cf64cdc5 is in state SUCCESS 2025-08-29 17:50:12.510193 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:12.510205 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:12.510216 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:12.510227 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:12.510237 | orchestrator | 2025-08-29 17:50:12 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:12.510279 | orchestrator | 2025-08-29 17:50:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:15.555662 | orchestrator | 2025-08-29 17:50:15 | INFO  | Task 
b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:15.556014 | orchestrator | 2025-08-29 17:50:15 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:15.559191 | orchestrator | 2025-08-29 17:50:15 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:15.559897 | orchestrator | 2025-08-29 17:50:15 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:15.560741 | orchestrator | 2025-08-29 17:50:15 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:15.561692 | orchestrator | 2025-08-29 17:50:15 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:15.561704 | orchestrator | 2025-08-29 17:50:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:18.605172 | orchestrator | 2025-08-29 17:50:18 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:18.605981 | orchestrator | 2025-08-29 17:50:18 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:18.607215 | orchestrator | 2025-08-29 17:50:18 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:18.608539 | orchestrator | 2025-08-29 17:50:18 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:18.609700 | orchestrator | 2025-08-29 17:50:18 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:18.611650 | orchestrator | 2025-08-29 17:50:18 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:18.611682 | orchestrator | 2025-08-29 17:50:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:21.704499 | orchestrator | 2025-08-29 17:50:21 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:21.705518 | orchestrator | 2025-08-29 17:50:21 | INFO  | Task 
ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:21.708996 | orchestrator | 2025-08-29 17:50:21 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:21.710687 | orchestrator | 2025-08-29 17:50:21 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:21.712787 | orchestrator | 2025-08-29 17:50:21 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:21.714480 | orchestrator | 2025-08-29 17:50:21 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:21.714555 | orchestrator | 2025-08-29 17:50:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:24.838857 | orchestrator | 2025-08-29 17:50:24 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:24.838930 | orchestrator | 2025-08-29 17:50:24 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:24.838936 | orchestrator | 2025-08-29 17:50:24 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state STARTED 2025-08-29 17:50:24.838941 | orchestrator | 2025-08-29 17:50:24 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:24.838945 | orchestrator | 2025-08-29 17:50:24 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:24.838949 | orchestrator | 2025-08-29 17:50:24 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:24.838970 | orchestrator | 2025-08-29 17:50:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:27.863897 | orchestrator | 2025-08-29 17:50:27.863994 | orchestrator | 2025-08-29 17:50:27.864010 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:50:27.864022 | orchestrator | 2025-08-29 17:50:27.864034 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-08-29 17:50:27.864083 | orchestrator | Friday 29 August 2025 17:49:54 +0000 (0:00:01.032) 0:00:01.032 ********* 2025-08-29 17:50:27.864096 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:27.864108 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:27.864119 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:27.864129 | orchestrator | 2025-08-29 17:50:27.864140 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:50:27.864151 | orchestrator | Friday 29 August 2025 17:49:55 +0000 (0:00:00.962) 0:00:01.995 ********* 2025-08-29 17:50:27.864161 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-08-29 17:50:27.864172 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-08-29 17:50:27.864183 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-08-29 17:50:27.864194 | orchestrator | 2025-08-29 17:50:27.864204 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-08-29 17:50:27.864215 | orchestrator | 2025-08-29 17:50:27.864226 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-08-29 17:50:27.864236 | orchestrator | Friday 29 August 2025 17:49:56 +0000 (0:00:00.754) 0:00:02.749 ********* 2025-08-29 17:50:27.864247 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:50:27.864258 | orchestrator | 2025-08-29 17:50:27.864269 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-08-29 17:50:27.864279 | orchestrator | Friday 29 August 2025 17:49:57 +0000 (0:00:01.195) 0:00:03.945 ********* 2025-08-29 17:50:27.864290 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 17:50:27.864301 | orchestrator | changed: [testbed-node-1] => (item=memcached) 
2025-08-29 17:50:27.864312 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 17:50:27.864322 | orchestrator | 2025-08-29 17:50:27.864333 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-08-29 17:50:27.864344 | orchestrator | Friday 29 August 2025 17:49:58 +0000 (0:00:01.416) 0:00:05.362 ********* 2025-08-29 17:50:27.864354 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 17:50:27.864365 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 17:50:27.864376 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 17:50:27.864387 | orchestrator | 2025-08-29 17:50:27.864398 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-08-29 17:50:27.864408 | orchestrator | Friday 29 August 2025 17:50:01 +0000 (0:00:02.577) 0:00:07.940 ********* 2025-08-29 17:50:27.864419 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:27.864429 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:27.864440 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:27.864489 | orchestrator | 2025-08-29 17:50:27.864502 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-08-29 17:50:27.864514 | orchestrator | Friday 29 August 2025 17:50:04 +0000 (0:00:03.163) 0:00:11.103 ********* 2025-08-29 17:50:27.864526 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:27.864538 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:27.864550 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:27.864562 | orchestrator | 2025-08-29 17:50:27.864574 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:50:27.864587 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:27.864600 | orchestrator | testbed-node-1 : 
ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:27.864636 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:27.864649 | orchestrator | 2025-08-29 17:50:27.864661 | orchestrator | 2025-08-29 17:50:27.864693 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:50:27.864706 | orchestrator | Friday 29 August 2025 17:50:08 +0000 (0:00:04.181) 0:00:15.285 ********* 2025-08-29 17:50:27.864718 | orchestrator | =============================================================================== 2025-08-29 17:50:27.864730 | orchestrator | memcached : Restart memcached container --------------------------------- 4.18s 2025-08-29 17:50:27.864742 | orchestrator | memcached : Check memcached container ----------------------------------- 3.16s 2025-08-29 17:50:27.864753 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.58s 2025-08-29 17:50:27.864766 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.42s 2025-08-29 17:50:27.864777 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.20s 2025-08-29 17:50:27.864789 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.96s 2025-08-29 17:50:27.864801 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2025-08-29 17:50:27.864814 | orchestrator | 2025-08-29 17:50:27.864827 | orchestrator | 2025-08-29 17:50:27 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:27.864838 | orchestrator | 2025-08-29 17:50:27 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:27.864848 | orchestrator | 2025-08-29 17:50:27 | INFO  | Task 9266b4b6-be29-423b-8325-02bbde400ddc is in state SUCCESS 2025-08-29 
17:50:27.866738 | orchestrator | 2025-08-29 17:50:27.866774 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:50:27.866786 | orchestrator | 2025-08-29 17:50:27.866797 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:50:27.866808 | orchestrator | Friday 29 August 2025 17:49:52 +0000 (0:00:00.633) 0:00:00.633 ********* 2025-08-29 17:50:27.866818 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:27.866829 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:27.866840 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:27.866851 | orchestrator | 2025-08-29 17:50:27.866862 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:50:27.866872 | orchestrator | Friday 29 August 2025 17:49:53 +0000 (0:00:00.648) 0:00:01.282 ********* 2025-08-29 17:50:27.866883 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-08-29 17:50:27.866894 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-08-29 17:50:27.866905 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-08-29 17:50:27.866915 | orchestrator | 2025-08-29 17:50:27.866926 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-08-29 17:50:27.866937 | orchestrator | 2025-08-29 17:50:27.866947 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-08-29 17:50:27.866958 | orchestrator | Friday 29 August 2025 17:49:55 +0000 (0:00:01.807) 0:00:03.089 ********* 2025-08-29 17:50:27.866969 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:50:27.866980 | orchestrator | 2025-08-29 17:50:27.866990 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-08-29 17:50:27.867001 | 
orchestrator | Friday 29 August 2025 17:49:56 +0000 (0:00:01.416) 0:00:04.506 ********* 2025-08-29 17:50:27.867014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867124 | orchestrator | 2025-08-29 17:50:27.867135 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-08-29 17:50:27.867145 | orchestrator | Friday 29 August 2025 17:49:59 +0000 (0:00:02.262) 0:00:06.768 ********* 2025-08-29 17:50:27.867157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867246 | orchestrator | 2025-08-29 17:50:27.867258 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-08-29 17:50:27.867268 | orchestrator | Friday 29 August 2025 17:50:02 +0000 (0:00:03.354) 0:00:10.123 ********* 2025-08-29 17:50:27.867286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867308 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-08-29 17:50:27.867358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867479 | orchestrator | 2025-08-29 17:50:27.867494 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-08-29 17:50:27.867514 | orchestrator | Friday 29 August 2025 17:50:07 +0000 (0:00:04.657) 0:00:14.781 ********* 2025-08-29 17:50:27.867528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-08-29 17:50:27.867625 | orchestrator | 2025-08-29 17:50:27.867638 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 17:50:27.867650 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:02.775) 0:00:17.556 ********* 2025-08-29 17:50:27.867662 | orchestrator | 2025-08-29 17:50:27.867674 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 17:50:27.867685 | orchestrator | Friday 29 August 2025 17:50:10 +0000 (0:00:00.262) 0:00:17.818 ********* 2025-08-29 17:50:27.867696 | orchestrator | 2025-08-29 17:50:27.867706 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-08-29 17:50:27.867717 | orchestrator | Friday 29 August 2025 17:50:10 +0000 (0:00:00.500) 0:00:18.319 ********* 2025-08-29 17:50:27.867728 | orchestrator | 
2025-08-29 17:50:27.867738 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-08-29 17:50:27.867749 | orchestrator | Friday 29 August 2025 17:50:11 +0000 (0:00:00.358) 0:00:18.678 ********* 2025-08-29 17:50:27.867760 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:27.867771 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:27.867782 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:27.867792 | orchestrator | 2025-08-29 17:50:27.867803 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-08-29 17:50:27.867814 | orchestrator | Friday 29 August 2025 17:50:19 +0000 (0:00:08.527) 0:00:27.205 ********* 2025-08-29 17:50:27.867825 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:27.867835 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:27.867846 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:27.867856 | orchestrator | 2025-08-29 17:50:27.867867 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:50:27.867878 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:27.867890 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:27.867901 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:27.867912 | orchestrator | 2025-08-29 17:50:27.867922 | orchestrator | 2025-08-29 17:50:27.867933 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:50:27.867944 | orchestrator | Friday 29 August 2025 17:50:26 +0000 (0:00:06.955) 0:00:34.161 ********* 2025-08-29 17:50:27.867954 | orchestrator | =============================================================================== 2025-08-29 17:50:27.867965 | 
orchestrator | redis : Restart redis container ----------------------------------------- 8.53s 2025-08-29 17:50:27.867976 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.95s 2025-08-29 17:50:27.867986 | orchestrator | redis : Copying over redis config files --------------------------------- 4.66s 2025-08-29 17:50:27.867996 | orchestrator | redis : Copying over default config.json files -------------------------- 3.35s 2025-08-29 17:50:27.868007 | orchestrator | redis : Check redis containers ------------------------------------------ 2.78s 2025-08-29 17:50:27.868018 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.26s 2025-08-29 17:50:27.868033 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.81s 2025-08-29 17:50:27.868043 | orchestrator | redis : include_tasks --------------------------------------------------- 1.42s 2025-08-29 17:50:27.868054 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.12s 2025-08-29 17:50:27.868065 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2025-08-29 17:50:27.868075 | orchestrator | 2025-08-29 17:50:27 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:27.868696 | orchestrator | 2025-08-29 17:50:27 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:27.869211 | orchestrator | 2025-08-29 17:50:27 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:27.869231 | orchestrator | 2025-08-29 17:50:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:31.404173 | orchestrator | 2025-08-29 17:50:31 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:31.413036 | orchestrator | 2025-08-29 17:50:31 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state 
STARTED 2025-08-29 17:50:31.418642 | orchestrator | 2025-08-29 17:50:31 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:31.427167 | orchestrator | 2025-08-29 17:50:31 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:31.430309 | orchestrator | 2025-08-29 17:50:31 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:31.430441 | orchestrator | 2025-08-29 17:50:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:34.614900 | orchestrator | 2025-08-29 17:50:34 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:34.615061 | orchestrator | 2025-08-29 17:50:34 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:34.615778 | orchestrator | 2025-08-29 17:50:34 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:34.616622 | orchestrator | 2025-08-29 17:50:34 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:34.617608 | orchestrator | 2025-08-29 17:50:34 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 2025-08-29 17:50:34.617638 | orchestrator | 2025-08-29 17:50:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:37.662780 | orchestrator | 2025-08-29 17:50:37 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:37.663283 | orchestrator | 2025-08-29 17:50:37 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:37.664280 | orchestrator | 2025-08-29 17:50:37 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:37.665242 | orchestrator | 2025-08-29 17:50:37 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:37.666573 | orchestrator | 2025-08-29 17:50:37 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state STARTED 
2025-08-29 17:50:37.666653 | orchestrator | 2025-08-29 17:50:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:40.700439 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:40.700594 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:40.702985 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task 57493341-c269-4e2e-a954-17d9e26da447 is in state STARTED 2025-08-29 17:50:40.703510 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:40.705396 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:40.706132 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task 36a61f6c-a38b-42d0-901b-c8f28392e3ca is in state STARTED 2025-08-29 17:50:40.709194 | orchestrator | 2025-08-29 17:50:40.709245 | orchestrator | 2025-08-29 17:50:40 | INFO  | Task 2cd8cbd2-624d-45ed-8f25-1309c9a13218 is in state SUCCESS 2025-08-29 17:50:40.710728 | orchestrator | 2025-08-29 17:50:40.710767 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-08-29 17:50:40.710823 | orchestrator | 2025-08-29 17:50:40.710836 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-08-29 17:50:40.710848 | orchestrator | Friday 29 August 2025 17:46:29 +0000 (0:00:00.259) 0:00:00.259 ********* 2025-08-29 17:50:40.710860 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.710873 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.710883 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.710894 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.711009 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.711041 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.711053 | orchestrator 
| 2025-08-29 17:50:40.711063 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-08-29 17:50:40.711074 | orchestrator | Friday 29 August 2025 17:46:30 +0000 (0:00:00.907) 0:00:01.167 ********* 2025-08-29 17:50:40.711085 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.711097 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.711107 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.711118 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.711129 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.711139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.711149 | orchestrator | 2025-08-29 17:50:40.711160 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-08-29 17:50:40.711171 | orchestrator | Friday 29 August 2025 17:46:31 +0000 (0:00:00.846) 0:00:02.013 ********* 2025-08-29 17:50:40.711182 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.711192 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.711202 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.711213 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.711223 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.711234 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.711244 | orchestrator | 2025-08-29 17:50:40.711255 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-08-29 17:50:40.711265 | orchestrator | Friday 29 August 2025 17:46:32 +0000 (0:00:01.202) 0:00:03.215 ********* 2025-08-29 17:50:40.711276 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.711287 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.711297 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.711308 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.711320 | orchestrator | 
changed: [testbed-node-1] 2025-08-29 17:50:40.711333 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.711345 | orchestrator | 2025-08-29 17:50:40.711356 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 17:50:40.711368 | orchestrator | Friday 29 August 2025 17:46:35 +0000 (0:00:02.878) 0:00:06.094 ********* 2025-08-29 17:50:40.711380 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.711392 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.711403 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.711415 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.711427 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.711439 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.711488 | orchestrator | 2025-08-29 17:50:40.711501 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 17:50:40.711513 | orchestrator | Friday 29 August 2025 17:46:37 +0000 (0:00:01.793) 0:00:07.887 ********* 2025-08-29 17:50:40.711523 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.711534 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.711545 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.711555 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.711565 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.711576 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.711586 | orchestrator | 2025-08-29 17:50:40.711596 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 17:50:40.711607 | orchestrator | Friday 29 August 2025 17:46:38 +0000 (0:00:01.347) 0:00:09.235 ********* 2025-08-29 17:50:40.711630 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.711641 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.711651 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 17:50:40.711662 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.711672 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.711682 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.711693 | orchestrator | 2025-08-29 17:50:40.711704 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 17:50:40.711714 | orchestrator | Friday 29 August 2025 17:46:39 +0000 (0:00:01.076) 0:00:10.311 ********* 2025-08-29 17:50:40.711725 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.711735 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.711746 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.711756 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.711767 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.711777 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.711787 | orchestrator | 2025-08-29 17:50:40.711798 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 17:50:40.711809 | orchestrator | Friday 29 August 2025 17:46:40 +0000 (0:00:00.761) 0:00:11.072 ********* 2025-08-29 17:50:40.711819 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:50:40.711830 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:50:40.711840 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.711851 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:50:40.711862 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:50:40.711872 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.711883 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:50:40.711893 | orchestrator 
| skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:50:40.711904 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.711915 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:50:40.711939 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:50:40.711951 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.711961 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:50:40.711972 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:50:40.711983 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.711993 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 17:50:40.712004 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 17:50:40.712022 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.712032 | orchestrator | 2025-08-29 17:50:40.712043 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 17:50:40.712054 | orchestrator | Friday 29 August 2025 17:46:41 +0000 (0:00:01.058) 0:00:12.131 ********* 2025-08-29 17:50:40.712065 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.712075 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.712086 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.712096 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.712106 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.712117 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.712127 | orchestrator | 2025-08-29 17:50:40.712138 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 
17:50:40.712150 | orchestrator | Friday 29 August 2025 17:46:43 +0000 (0:00:02.214) 0:00:14.346 ********* 2025-08-29 17:50:40.712161 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.712178 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.712189 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.712199 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.712210 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.712220 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.712230 | orchestrator | 2025-08-29 17:50:40.712241 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 17:50:40.712252 | orchestrator | Friday 29 August 2025 17:46:45 +0000 (0:00:01.566) 0:00:15.912 ********* 2025-08-29 17:50:40.712262 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.712273 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.712283 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.712294 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.712304 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.712315 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.712325 | orchestrator | 2025-08-29 17:50:40.712336 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 17:50:40.712347 | orchestrator | Friday 29 August 2025 17:46:52 +0000 (0:00:06.784) 0:00:22.696 ********* 2025-08-29 17:50:40.712357 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.712368 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.712378 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.712389 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.712399 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.712410 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.712420 | orchestrator | 2025-08-29 17:50:40.712431 | orchestrator 
| TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 17:50:40.712475 | orchestrator | Friday 29 August 2025 17:46:54 +0000 (0:00:02.582) 0:00:25.279 ********* 2025-08-29 17:50:40.712487 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.712498 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.712508 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.712519 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.712530 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.712540 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.712550 | orchestrator | 2025-08-29 17:50:40.712561 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-08-29 17:50:40.712574 | orchestrator | Friday 29 August 2025 17:46:57 +0000 (0:00:02.504) 0:00:27.784 ********* 2025-08-29 17:50:40.712585 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.712595 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.712606 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.712616 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.712627 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.712637 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.712648 | orchestrator | 2025-08-29 17:50:40.712658 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 17:50:40.712669 | orchestrator | Friday 29 August 2025 17:46:58 +0000 (0:00:01.177) 0:00:28.961 ********* 2025-08-29 17:50:40.712680 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 17:50:40.712691 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 17:50:40.712701 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 17:50:40.712712 | orchestrator | changed: [testbed-node-5] => (item=rancher) 
2025-08-29 17:50:40.712722 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 17:50:40.712733 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-08-29 17:50:40.712743 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 17:50:40.712754 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 17:50:40.712764 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 17:50:40.712775 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 17:50:40.712785 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 17:50:40.712804 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 17:50:40.712815 | orchestrator | 2025-08-29 17:50:40.712826 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 17:50:40.712836 | orchestrator | Friday 29 August 2025 17:47:02 +0000 (0:00:04.212) 0:00:33.174 ********* 2025-08-29 17:50:40.712847 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.712858 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.712868 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.712879 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.712889 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.712899 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.712910 | orchestrator | 2025-08-29 17:50:40.712928 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 17:50:40.712939 | orchestrator | 2025-08-29 17:50:40.712950 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 17:50:40.712961 | orchestrator | Friday 29 August 2025 17:47:07 +0000 (0:00:04.721) 0:00:37.896 ********* 2025-08-29 17:50:40.712971 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.712982 | 
orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.712993 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.713003 | orchestrator | 2025-08-29 17:50:40.713014 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 17:50:40.713029 | orchestrator | Friday 29 August 2025 17:47:09 +0000 (0:00:01.730) 0:00:39.626 ********* 2025-08-29 17:50:40.713040 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.713051 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.713061 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.713072 | orchestrator | 2025-08-29 17:50:40.713082 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-08-29 17:50:40.713093 | orchestrator | Friday 29 August 2025 17:47:11 +0000 (0:00:01.906) 0:00:41.533 ********* 2025-08-29 17:50:40.713103 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.713114 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.713124 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.713134 | orchestrator | 2025-08-29 17:50:40.713145 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 17:50:40.713156 | orchestrator | Friday 29 August 2025 17:47:12 +0000 (0:00:01.607) 0:00:43.140 ********* 2025-08-29 17:50:40.713167 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.713177 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.713188 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.713198 | orchestrator | 2025-08-29 17:50:40.713209 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 17:50:40.713219 | orchestrator | Friday 29 August 2025 17:47:14 +0000 (0:00:01.351) 0:00:44.491 ********* 2025-08-29 17:50:40.713235 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.713253 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.713272 
| orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.713289 | orchestrator | 2025-08-29 17:50:40.713305 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 17:50:40.713323 | orchestrator | Friday 29 August 2025 17:47:15 +0000 (0:00:01.017) 0:00:45.509 ********* 2025-08-29 17:50:40.713340 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.713359 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.713378 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.713396 | orchestrator | 2025-08-29 17:50:40.713414 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 17:50:40.713426 | orchestrator | Friday 29 August 2025 17:47:16 +0000 (0:00:01.230) 0:00:46.740 ********* 2025-08-29 17:50:40.713437 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.713503 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.713515 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.713526 | orchestrator | 2025-08-29 17:50:40.713537 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 17:50:40.713558 | orchestrator | Friday 29 August 2025 17:47:19 +0000 (0:00:02.879) 0:00:49.619 ********* 2025-08-29 17:50:40.713568 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:50:40.713579 | orchestrator | 2025-08-29 17:50:40.713590 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 17:50:40.713601 | orchestrator | Friday 29 August 2025 17:47:20 +0000 (0:00:01.009) 0:00:50.629 ********* 2025-08-29 17:50:40.713611 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.713622 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.713633 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.713643 | orchestrator | 2025-08-29 
17:50:40.713654 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-08-29 17:50:40.713665 | orchestrator | Friday 29 August 2025 17:47:23 +0000 (0:00:03.221) 0:00:53.851 ********* 2025-08-29 17:50:40.713675 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.713686 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.713697 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.713707 | orchestrator | 2025-08-29 17:50:40.713718 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 17:50:40.713729 | orchestrator | Friday 29 August 2025 17:47:24 +0000 (0:00:01.120) 0:00:54.971 ********* 2025-08-29 17:50:40.713740 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.713750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.713761 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.713772 | orchestrator | 2025-08-29 17:50:40.713782 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-08-29 17:50:40.713793 | orchestrator | Friday 29 August 2025 17:47:26 +0000 (0:00:01.806) 0:00:56.778 ********* 2025-08-29 17:50:40.713804 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.713814 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.713825 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.713836 | orchestrator | 2025-08-29 17:50:40.713846 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-08-29 17:50:40.713857 | orchestrator | Friday 29 August 2025 17:47:28 +0000 (0:00:02.088) 0:00:58.867 ********* 2025-08-29 17:50:40.713868 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.713878 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.713889 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.713900 | orchestrator | 2025-08-29 
17:50:40.713910 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-08-29 17:50:40.713921 | orchestrator | Friday 29 August 2025 17:47:28 +0000 (0:00:00.482) 0:00:59.350 ********* 2025-08-29 17:50:40.713932 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.713942 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.713953 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.713964 | orchestrator | 2025-08-29 17:50:40.713975 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-08-29 17:50:40.713986 | orchestrator | Friday 29 August 2025 17:47:30 +0000 (0:00:01.193) 0:01:00.544 ********* 2025-08-29 17:50:40.713996 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714007 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714065 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714077 | orchestrator | 2025-08-29 17:50:40.714095 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-08-29 17:50:40.714105 | orchestrator | Friday 29 August 2025 17:47:32 +0000 (0:00:01.900) 0:01:02.444 ********* 2025-08-29 17:50:40.714115 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 17:50:40.714131 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 17:50:40.714141 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 17:50:40.714158 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-08-29 17:50:40.714168 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 17:50:40.714178 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 17:50:40.714187 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 17:50:40.714197 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 17:50:40.714207 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 17:50:40.714216 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 17:50:40.714226 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 17:50:40.714235 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
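The `FAILED - RETRYING … (N retries left)` lines above come from Ansible's `until`/`retries` mechanism: the join check is re-run until all three nodes report in or the retry budget is exhausted. A minimal shell sketch of that pattern (the `retry` helper and the `kubectl` check in the comment are illustrative, not the playbook's actual code):

```shell
#!/bin/sh
# Hypothetical sketch of the retry/until pattern behind the
# "Verify that all nodes actually joined" task: re-run a check
# up to N times, announcing the remaining retries on each failure.
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    echo "FAILED - RETRYING ($((attempts - i)) retries left)." >&2
    sleep "$delay"
  done
  return 1
}
# Example check (assumes kubectl is already configured for this cluster):
#   retry 20 10 sh -c '[ "$(kubectl get nodes --no-headers | grep -c " Ready")" -ge 3 ]'
```

In this run the check needed three retry rounds before all of `testbed-node-0/1/2` appeared, which is why the task took 45 seconds.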
2025-08-29 17:50:40.714244 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.714254 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.714263 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.714273 | orchestrator | 2025-08-29 17:50:40.714282 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-08-29 17:50:40.714292 | orchestrator | Friday 29 August 2025 17:48:17 +0000 (0:00:45.508) 0:01:47.952 ********* 2025-08-29 17:50:40.714301 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.714311 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.714320 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.714329 | orchestrator | 2025-08-29 17:50:40.714339 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-08-29 17:50:40.714349 | orchestrator | Friday 29 August 2025 17:48:18 +0000 (0:00:00.573) 0:01:48.526 ********* 2025-08-29 17:50:40.714358 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714368 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714388 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714399 | orchestrator | 2025-08-29 17:50:40.714408 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-08-29 17:50:40.714428 | orchestrator | Friday 29 August 2025 17:48:19 +0000 (0:00:01.337) 0:01:49.863 ********* 2025-08-29 17:50:40.714438 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714464 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714473 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714483 | orchestrator | 2025-08-29 17:50:40.714492 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-08-29 17:50:40.714502 | orchestrator | Friday 29 August 2025 17:48:20 +0000 (0:00:01.494) 0:01:51.358 ********* 2025-08-29 17:50:40.714511 
| orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714520 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714530 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714539 | orchestrator | 2025-08-29 17:50:40.714549 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-08-29 17:50:40.714558 | orchestrator | Friday 29 August 2025 17:48:45 +0000 (0:00:24.756) 0:02:16.114 ********* 2025-08-29 17:50:40.714568 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.714577 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.714587 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.714602 | orchestrator | 2025-08-29 17:50:40.714612 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-08-29 17:50:40.714621 | orchestrator | Friday 29 August 2025 17:48:46 +0000 (0:00:00.792) 0:02:16.907 ********* 2025-08-29 17:50:40.714631 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.714640 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.714649 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.714659 | orchestrator | 2025-08-29 17:50:40.714668 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-08-29 17:50:40.714678 | orchestrator | Friday 29 August 2025 17:48:47 +0000 (0:00:00.956) 0:02:17.864 ********* 2025-08-29 17:50:40.714687 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714697 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714706 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714715 | orchestrator | 2025-08-29 17:50:40.714725 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-08-29 17:50:40.714734 | orchestrator | Friday 29 August 2025 17:48:48 +0000 (0:00:00.710) 0:02:18.575 ********* 2025-08-29 17:50:40.714743 | orchestrator | ok: [testbed-node-0] 
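The four node-token tasks above (register mode, change access, read, restore) follow a common pattern: temporarily loosen permissions on k3s's `node-token` file so it can be read, then put the original mode back. A sketch using a throwaway file in place of `/var/lib/rancher/k3s/server/node-token` (token value and file path are stand-ins):

```shell
#!/bin/sh
# Hypothetical sketch of the node-token read sequence from the play.
# A temp file stands in for /var/lib/rancher/k3s/server/node-token.
TOKEN_FILE=$(mktemp)
printf 'K10deadbeef::server:secret\n' > "$TOKEN_FILE"
chmod 600 "$TOKEN_FILE"

orig_mode=$(stat -c %a "$TOKEN_FILE")   # "Register node-token file access mode"
chmod g+rx,o+rx "$TOKEN_FILE"           # "Change file access node-token"
token=$(cat "$TOKEN_FILE")              # "Read node-token from master"
chmod "$orig_mode" "$TOKEN_FILE"        # "Restore node-token file access"
echo "$token"
```

Restoring the recorded mode rather than hard-coding one keeps the task idempotent regardless of how k3s created the file.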
2025-08-29 17:50:40.714758 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.714768 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.714778 | orchestrator | 2025-08-29 17:50:40.714787 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-08-29 17:50:40.714797 | orchestrator | Friday 29 August 2025 17:48:48 +0000 (0:00:00.726) 0:02:19.301 ********* 2025-08-29 17:50:40.714807 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.714816 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.714826 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.714835 | orchestrator | 2025-08-29 17:50:40.714845 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-08-29 17:50:40.714859 | orchestrator | Friday 29 August 2025 17:48:49 +0000 (0:00:00.322) 0:02:19.624 ********* 2025-08-29 17:50:40.714869 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714878 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714888 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714898 | orchestrator | 2025-08-29 17:50:40.714907 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-08-29 17:50:40.714917 | orchestrator | Friday 29 August 2025 17:48:50 +0000 (0:00:01.090) 0:02:20.714 ********* 2025-08-29 17:50:40.714926 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714936 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.714945 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.714954 | orchestrator | 2025-08-29 17:50:40.714964 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-08-29 17:50:40.714973 | orchestrator | Friday 29 August 2025 17:48:51 +0000 (0:00:00.705) 0:02:21.419 ********* 2025-08-29 17:50:40.714983 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.714992 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.715002 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.715011 | orchestrator | 2025-08-29 17:50:40.715021 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-08-29 17:50:40.715030 | orchestrator | Friday 29 August 2025 17:48:51 +0000 (0:00:00.954) 0:02:22.374 ********* 2025-08-29 17:50:40.715040 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:50:40.715049 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:50:40.715059 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:50:40.715068 | orchestrator | 2025-08-29 17:50:40.715078 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-08-29 17:50:40.715087 | orchestrator | Friday 29 August 2025 17:48:52 +0000 (0:00:00.923) 0:02:23.297 ********* 2025-08-29 17:50:40.715097 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.715106 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.715116 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.715125 | orchestrator | 2025-08-29 17:50:40.715135 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-08-29 17:50:40.715154 | orchestrator | Friday 29 August 2025 17:48:53 +0000 (0:00:00.628) 0:02:23.926 ********* 2025-08-29 17:50:40.715164 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.715173 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.715183 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.715192 | orchestrator | 2025-08-29 17:50:40.715202 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-08-29 17:50:40.715211 | orchestrator | Friday 29 August 2025 17:48:53 +0000 (0:00:00.313) 0:02:24.240 ********* 2025-08-29 17:50:40.715221 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.715230 | orchestrator | 
ok: [testbed-node-0] 2025-08-29 17:50:40.715240 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.715249 | orchestrator | 2025-08-29 17:50:40.715259 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-08-29 17:50:40.715268 | orchestrator | Friday 29 August 2025 17:48:54 +0000 (0:00:00.893) 0:02:25.134 ********* 2025-08-29 17:50:40.715278 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.715287 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.715297 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.715306 | orchestrator | 2025-08-29 17:50:40.715316 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-08-29 17:50:40.715325 | orchestrator | Friday 29 August 2025 17:48:55 +0000 (0:00:00.820) 0:02:25.955 ********* 2025-08-29 17:50:40.715335 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 17:50:40.715345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 17:50:40.715354 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-08-29 17:50:40.715364 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 17:50:40.715373 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 17:50:40.715382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-08-29 17:50:40.715392 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 17:50:40.715402 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 
17:50:40.715411 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-08-29 17:50:40.715421 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-08-29 17:50:40.715430 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 17:50:40.715440 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 17:50:40.715493 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-08-29 17:50:40.715510 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 17:50:40.715520 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 17:50:40.715529 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-08-29 17:50:40.715539 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 17:50:40.715548 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 17:50:40.715563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-08-29 17:50:40.715573 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-08-29 17:50:40.715594 | orchestrator | 2025-08-29 17:50:40.715604 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-08-29 17:50:40.715613 | orchestrator | 2025-08-29 17:50:40.715623 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-08-29 17:50:40.715632 | orchestrator | Friday 29 August 2025 17:48:59 +0000 (0:00:03.439) 
0:02:29.394 ********* 2025-08-29 17:50:40.715642 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.715651 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.715661 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.715670 | orchestrator | 2025-08-29 17:50:40.715680 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-08-29 17:50:40.715689 | orchestrator | Friday 29 August 2025 17:48:59 +0000 (0:00:00.409) 0:02:29.804 ********* 2025-08-29 17:50:40.715698 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.715708 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.715717 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.715726 | orchestrator | 2025-08-29 17:50:40.715736 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-08-29 17:50:40.715745 | orchestrator | Friday 29 August 2025 17:49:00 +0000 (0:00:00.733) 0:02:30.537 ********* 2025-08-29 17:50:40.715752 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.715760 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.715768 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.715776 | orchestrator | 2025-08-29 17:50:40.715784 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-08-29 17:50:40.715791 | orchestrator | Friday 29 August 2025 17:49:00 +0000 (0:00:00.755) 0:02:31.293 ********* 2025-08-29 17:50:40.715799 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:50:40.715808 | orchestrator | 2025-08-29 17:50:40.715815 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-08-29 17:50:40.715823 | orchestrator | Friday 29 August 2025 17:49:01 +0000 (0:00:00.668) 0:02:31.961 ********* 2025-08-29 17:50:40.715831 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.715839 
| orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.715847 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.715854 | orchestrator | 2025-08-29 17:50:40.715862 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-08-29 17:50:40.715870 | orchestrator | Friday 29 August 2025 17:49:01 +0000 (0:00:00.355) 0:02:32.317 ********* 2025-08-29 17:50:40.715878 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.715885 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.715893 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.715901 | orchestrator | 2025-08-29 17:50:40.715909 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-08-29 17:50:40.715916 | orchestrator | Friday 29 August 2025 17:49:02 +0000 (0:00:00.544) 0:02:32.861 ********* 2025-08-29 17:50:40.715924 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.715932 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.715940 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.715947 | orchestrator | 2025-08-29 17:50:40.715955 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-08-29 17:50:40.715963 | orchestrator | Friday 29 August 2025 17:49:02 +0000 (0:00:00.398) 0:02:33.260 ********* 2025-08-29 17:50:40.715971 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.715978 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.715986 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.715994 | orchestrator | 2025-08-29 17:50:40.716002 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-08-29 17:50:40.716009 | orchestrator | Friday 29 August 2025 17:49:03 +0000 (0:00:00.709) 0:02:33.970 ********* 2025-08-29 17:50:40.716017 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.716025 | orchestrator | 
changed: [testbed-node-4] 2025-08-29 17:50:40.716033 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.716040 | orchestrator | 2025-08-29 17:50:40.716053 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-08-29 17:50:40.716061 | orchestrator | Friday 29 August 2025 17:49:04 +0000 (0:00:01.268) 0:02:35.238 ********* 2025-08-29 17:50:40.716069 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.716077 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.716084 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.716092 | orchestrator | 2025-08-29 17:50:40.716100 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-08-29 17:50:40.716107 | orchestrator | Friday 29 August 2025 17:49:06 +0000 (0:00:01.788) 0:02:37.027 ********* 2025-08-29 17:50:40.716115 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:50:40.716123 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:50:40.716130 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:50:40.716138 | orchestrator | 2025-08-29 17:50:40.716146 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 17:50:40.716153 | orchestrator | 2025-08-29 17:50:40.716161 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 17:50:40.716169 | orchestrator | Friday 29 August 2025 17:49:19 +0000 (0:00:13.158) 0:02:50.186 ********* 2025-08-29 17:50:40.716177 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.716185 | orchestrator | 2025-08-29 17:50:40.716192 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 17:50:40.716200 | orchestrator | Friday 29 August 2025 17:49:20 +0000 (0:00:00.831) 0:02:51.018 ********* 2025-08-29 17:50:40.716212 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716221 | 
orchestrator | 2025-08-29 17:50:40.716229 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 17:50:40.716236 | orchestrator | Friday 29 August 2025 17:49:21 +0000 (0:00:00.455) 0:02:51.474 ********* 2025-08-29 17:50:40.716244 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 17:50:40.716252 | orchestrator | 2025-08-29 17:50:40.716260 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 17:50:40.716268 | orchestrator | Friday 29 August 2025 17:49:21 +0000 (0:00:00.533) 0:02:52.007 ********* 2025-08-29 17:50:40.716280 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716288 | orchestrator | 2025-08-29 17:50:40.716296 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 17:50:40.716303 | orchestrator | Friday 29 August 2025 17:49:22 +0000 (0:00:00.880) 0:02:52.888 ********* 2025-08-29 17:50:40.716311 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716319 | orchestrator | 2025-08-29 17:50:40.716327 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 17:50:40.716335 | orchestrator | Friday 29 August 2025 17:49:23 +0000 (0:00:01.169) 0:02:54.057 ********* 2025-08-29 17:50:40.716342 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 17:50:40.716350 | orchestrator | 2025-08-29 17:50:40.716358 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 17:50:40.716366 | orchestrator | Friday 29 August 2025 17:49:25 +0000 (0:00:01.705) 0:02:55.763 ********* 2025-08-29 17:50:40.716373 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 17:50:40.716381 | orchestrator | 2025-08-29 17:50:40.716389 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 
17:50:40.716397 | orchestrator | Friday 29 August 2025 17:49:26 +0000 (0:00:00.956) 0:02:56.720 ********* 2025-08-29 17:50:40.716404 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716412 | orchestrator | 2025-08-29 17:50:40.716420 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 17:50:40.716428 | orchestrator | Friday 29 August 2025 17:49:26 +0000 (0:00:00.472) 0:02:57.192 ********* 2025-08-29 17:50:40.716435 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716464 | orchestrator | 2025-08-29 17:50:40.716478 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-08-29 17:50:40.716491 | orchestrator | 2025-08-29 17:50:40.716515 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-08-29 17:50:40.716524 | orchestrator | Friday 29 August 2025 17:49:27 +0000 (0:00:00.490) 0:02:57.683 ********* 2025-08-29 17:50:40.716532 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.716539 | orchestrator | 2025-08-29 17:50:40.716547 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-08-29 17:50:40.716555 | orchestrator | Friday 29 August 2025 17:49:27 +0000 (0:00:00.156) 0:02:57.839 ********* 2025-08-29 17:50:40.716563 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 17:50:40.716571 | orchestrator | 2025-08-29 17:50:40.716578 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-08-29 17:50:40.716586 | orchestrator | Friday 29 August 2025 17:49:27 +0000 (0:00:00.242) 0:02:58.082 ********* 2025-08-29 17:50:40.716594 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.716602 | orchestrator | 2025-08-29 17:50:40.716609 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 
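The "Change server address in the kubeconfig" tasks above exist because k3s writes its kubeconfig pointing at `https://127.0.0.1:6443`, which only works on the node itself; the play rewrites it to the cluster endpoint (`https://192.168.16.8:6443` in this run, per the earlier "Configure kubectl cluster" task). A sketch of that rewrite on a minimal stand-in kubeconfig:

```shell
#!/bin/sh
# Hypothetical sketch of the kubeconfig server-address rewrite.
# A temp file with a minimal cluster stanza stands in for ~/.kube/config.
KUBECONFIG_FILE=$(mktemp)
cat > "$KUBECONFIG_FILE" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
# Point the kubeconfig at the cluster VIP instead of the local loopback.
sed -i 's|https://127\.0\.0\.1:6443|https://192.168.16.8:6443|' "$KUBECONFIG_FILE"
grep 'server:' "$KUBECONFIG_FILE"
```

The same substitution is applied twice in the play: once for the operator's home directory and once for the copy used inside the manager service.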
2025-08-29 17:50:40.716617 | orchestrator | Friday 29 August 2025 17:49:28 +0000 (0:00:01.084) 0:02:59.166 ********* 2025-08-29 17:50:40.716625 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.716633 | orchestrator | 2025-08-29 17:50:40.716640 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-08-29 17:50:40.716648 | orchestrator | Friday 29 August 2025 17:49:30 +0000 (0:00:01.796) 0:03:00.963 ********* 2025-08-29 17:50:40.716656 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716664 | orchestrator | 2025-08-29 17:50:40.716671 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-08-29 17:50:40.716679 | orchestrator | Friday 29 August 2025 17:49:31 +0000 (0:00:00.818) 0:03:01.782 ********* 2025-08-29 17:50:40.716687 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.716695 | orchestrator | 2025-08-29 17:50:40.716702 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-08-29 17:50:40.716710 | orchestrator | Friday 29 August 2025 17:49:31 +0000 (0:00:00.471) 0:03:02.253 ********* 2025-08-29 17:50:40.716718 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716725 | orchestrator | 2025-08-29 17:50:40.716733 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-08-29 17:50:40.716741 | orchestrator | Friday 29 August 2025 17:49:42 +0000 (0:00:10.231) 0:03:12.485 ********* 2025-08-29 17:50:40.716749 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.716757 | orchestrator | 2025-08-29 17:50:40.716765 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-08-29 17:50:40.716772 | orchestrator | Friday 29 August 2025 17:49:59 +0000 (0:00:17.507) 0:03:29.993 ********* 2025-08-29 17:50:40.716780 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.716788 | orchestrator 
| 2025-08-29 17:50:40.716796 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-08-29 17:50:40.716804 | orchestrator | 2025-08-29 17:50:40.716811 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-08-29 17:50:40.716819 | orchestrator | Friday 29 August 2025 17:50:00 +0000 (0:00:00.534) 0:03:30.527 ********* 2025-08-29 17:50:40.716827 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.716835 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.716843 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.716850 | orchestrator | 2025-08-29 17:50:40.716858 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-08-29 17:50:40.716866 | orchestrator | Friday 29 August 2025 17:50:00 +0000 (0:00:00.501) 0:03:31.029 ********* 2025-08-29 17:50:40.716873 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.716881 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.716889 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.716897 | orchestrator | 2025-08-29 17:50:40.716910 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-08-29 17:50:40.716918 | orchestrator | Friday 29 August 2025 17:50:00 +0000 (0:00:00.309) 0:03:31.338 ********* 2025-08-29 17:50:40.716931 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:50:40.716939 | orchestrator | 2025-08-29 17:50:40.716947 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-08-29 17:50:40.716955 | orchestrator | Friday 29 August 2025 17:50:01 +0000 (0:00:00.560) 0:03:31.899 ********* 2025-08-29 17:50:40.716963 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.716971 | orchestrator | 2025-08-29 17:50:40.716979 | 
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-08-29 17:50:40.716986 | orchestrator | Friday 29 August 2025 17:50:01 +0000 (0:00:00.195) 0:03:32.094 ********* 2025-08-29 17:50:40.716994 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717002 | orchestrator | 2025-08-29 17:50:40.717010 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-08-29 17:50:40.717018 | orchestrator | Friday 29 August 2025 17:50:01 +0000 (0:00:00.280) 0:03:32.375 ********* 2025-08-29 17:50:40.717026 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717033 | orchestrator | 2025-08-29 17:50:40.717048 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-08-29 17:50:40.717056 | orchestrator | Friday 29 August 2025 17:50:02 +0000 (0:00:00.605) 0:03:32.981 ********* 2025-08-29 17:50:40.717064 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717072 | orchestrator | 2025-08-29 17:50:40.717080 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-08-29 17:50:40.717088 | orchestrator | Friday 29 August 2025 17:50:02 +0000 (0:00:00.236) 0:03:33.217 ********* 2025-08-29 17:50:40.717095 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717103 | orchestrator | 2025-08-29 17:50:40.717111 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-08-29 17:50:40.717119 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:00.229) 0:03:33.447 ********* 2025-08-29 17:50:40.717126 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717134 | orchestrator | 2025-08-29 17:50:40.717142 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-08-29 17:50:40.717150 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:00.200) 0:03:33.647 ********* 
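The Cilium CLI install tasks above (all skipped on this run, since the CLI was already in place) download a tarball together with a `.sha256sum` file and verify one against the other before extracting. A sketch of that verification step with a dummy payload standing in for the real release artifact:

```shell
#!/bin/sh
# Hypothetical sketch of "Verify the downloaded tarball": check the
# artifact against its published sha256 checksum file before use.
# A temp file stands in for the cilium-cli release tarball.
TARBALL=$(mktemp)
printf 'cilium-cli payload\n' > "$TARBALL"
sha256sum "$TARBALL" > "$TARBALL.sha256sum"   # normally shipped alongside the release
sha256sum -c "$TARBALL.sha256sum"
```

`sha256sum -c` exits non-zero on mismatch, so a failed verification would abort the play before the tarball is extracted to `/usr/local/bin`.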
2025-08-29 17:50:40.717157 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717165 | orchestrator | 2025-08-29 17:50:40.717173 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-08-29 17:50:40.717181 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:00.202) 0:03:33.850 ********* 2025-08-29 17:50:40.717188 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717196 | orchestrator | 2025-08-29 17:50:40.717204 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-08-29 17:50:40.717212 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:00.229) 0:03:34.080 ********* 2025-08-29 17:50:40.717220 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717227 | orchestrator | 2025-08-29 17:50:40.717235 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-08-29 17:50:40.717243 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:00.293) 0:03:34.374 ********* 2025-08-29 17:50:40.717251 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-08-29 17:50:40.717259 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-08-29 17:50:40.717266 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717274 | orchestrator | 2025-08-29 17:50:40.717282 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-08-29 17:50:40.717290 | orchestrator | Friday 29 August 2025 17:50:04 +0000 (0:00:00.387) 0:03:34.761 ********* 2025-08-29 17:50:40.717297 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717305 | orchestrator | 2025-08-29 17:50:40.717313 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-08-29 17:50:40.717320 | orchestrator | Friday 29 August 2025 17:50:04 +0000 (0:00:00.300) 0:03:35.062 ********* 2025-08-29 
17:50:40.717333 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717341 | orchestrator | 2025-08-29 17:50:40.717349 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-08-29 17:50:40.717356 | orchestrator | Friday 29 August 2025 17:50:04 +0000 (0:00:00.224) 0:03:35.286 ********* 2025-08-29 17:50:40.717364 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717372 | orchestrator | 2025-08-29 17:50:40.717380 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-08-29 17:50:40.717387 | orchestrator | Friday 29 August 2025 17:50:05 +0000 (0:00:00.381) 0:03:35.668 ********* 2025-08-29 17:50:40.717395 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717403 | orchestrator | 2025-08-29 17:50:40.717411 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-08-29 17:50:40.717419 | orchestrator | Friday 29 August 2025 17:50:05 +0000 (0:00:00.350) 0:03:36.019 ********* 2025-08-29 17:50:40.717427 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717434 | orchestrator | 2025-08-29 17:50:40.717457 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-08-29 17:50:40.717465 | orchestrator | Friday 29 August 2025 17:50:05 +0000 (0:00:00.211) 0:03:36.231 ********* 2025-08-29 17:50:40.717473 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717481 | orchestrator | 2025-08-29 17:50:40.717489 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-08-29 17:50:40.717496 | orchestrator | Friday 29 August 2025 17:50:06 +0000 (0:00:00.869) 0:03:37.100 ********* 2025-08-29 17:50:40.717504 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717512 | orchestrator | 2025-08-29 17:50:40.717520 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] 
************************ 2025-08-29 17:50:40.717528 | orchestrator | Friday 29 August 2025 17:50:07 +0000 (0:00:00.309) 0:03:37.409 ********* 2025-08-29 17:50:40.717536 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717543 | orchestrator | 2025-08-29 17:50:40.717551 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-08-29 17:50:40.717564 | orchestrator | Friday 29 August 2025 17:50:07 +0000 (0:00:00.331) 0:03:37.741 ********* 2025-08-29 17:50:40.717572 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717580 | orchestrator | 2025-08-29 17:50:40.717588 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-08-29 17:50:40.717596 | orchestrator | Friday 29 August 2025 17:50:08 +0000 (0:00:00.719) 0:03:38.461 ********* 2025-08-29 17:50:40.717604 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717612 | orchestrator | 2025-08-29 17:50:40.717620 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-08-29 17:50:40.717631 | orchestrator | Friday 29 August 2025 17:50:08 +0000 (0:00:00.255) 0:03:38.716 ********* 2025-08-29 17:50:40.717640 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717648 | orchestrator | 2025-08-29 17:50:40.717656 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-08-29 17:50:40.717664 | orchestrator | Friday 29 August 2025 17:50:08 +0000 (0:00:00.272) 0:03:38.989 ********* 2025-08-29 17:50:40.717672 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-08-29 17:50:40.717680 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-08-29 17:50:40.717688 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-08-29 17:50:40.717696 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-08-29 
17:50:40.717703 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717711 | orchestrator | 2025-08-29 17:50:40.717719 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-08-29 17:50:40.717727 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:00.600) 0:03:39.590 ********* 2025-08-29 17:50:40.717735 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717743 | orchestrator | 2025-08-29 17:50:40.717751 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-08-29 17:50:40.717764 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:00.279) 0:03:39.870 ********* 2025-08-29 17:50:40.717772 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717780 | orchestrator | 2025-08-29 17:50:40.717788 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-08-29 17:50:40.717796 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:00.242) 0:03:40.112 ********* 2025-08-29 17:50:40.717804 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717812 | orchestrator | 2025-08-29 17:50:40.717819 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-08-29 17:50:40.717827 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:00.230) 0:03:40.343 ********* 2025-08-29 17:50:40.717835 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717843 | orchestrator | 2025-08-29 17:50:40.717851 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-08-29 17:50:40.717859 | orchestrator | Friday 29 August 2025 17:50:10 +0000 (0:00:00.441) 0:03:40.784 ********* 2025-08-29 17:50:40.717867 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-08-29 17:50:40.717876 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-08-29 17:50:40.717884 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717891 | orchestrator | 2025-08-29 17:50:40.717899 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-08-29 17:50:40.717907 | orchestrator | Friday 29 August 2025 17:50:11 +0000 (0:00:00.696) 0:03:41.480 ********* 2025-08-29 17:50:40.717915 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.717923 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.717931 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.717938 | orchestrator | 2025-08-29 17:50:40.717947 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-08-29 17:50:40.717954 | orchestrator | Friday 29 August 2025 17:50:11 +0000 (0:00:00.805) 0:03:42.286 ********* 2025-08-29 17:50:40.717962 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.717970 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.717978 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.717986 | orchestrator | 2025-08-29 17:50:40.717994 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-08-29 17:50:40.718002 | orchestrator | 2025-08-29 17:50:40.718010 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-08-29 17:50:40.718043 | orchestrator | Friday 29 August 2025 17:50:13 +0000 (0:00:01.213) 0:03:43.500 ********* 2025-08-29 17:50:40.718051 | orchestrator | ok: [testbed-manager] 2025-08-29 17:50:40.718059 | orchestrator | 2025-08-29 17:50:40.718067 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-08-29 17:50:40.718075 | orchestrator | Friday 29 August 2025 17:50:13 +0000 (0:00:00.156) 0:03:43.656 ********* 2025-08-29 17:50:40.718082 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-08-29 17:50:40.718090 | orchestrator | 2025-08-29 17:50:40.718098 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-08-29 17:50:40.718106 | orchestrator | Friday 29 August 2025 17:50:13 +0000 (0:00:00.508) 0:03:44.165 ********* 2025-08-29 17:50:40.718114 | orchestrator | changed: [testbed-manager] 2025-08-29 17:50:40.718122 | orchestrator | 2025-08-29 17:50:40.718129 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-08-29 17:50:40.718137 | orchestrator | 2025-08-29 17:50:40.718145 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-08-29 17:50:40.718153 | orchestrator | Friday 29 August 2025 17:50:21 +0000 (0:00:07.366) 0:03:51.531 ********* 2025-08-29 17:50:40.718161 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:50:40.718169 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:50:40.718177 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:50:40.718185 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:50:40.718193 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:50:40.718206 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:50:40.718214 | orchestrator | 2025-08-29 17:50:40.718222 | orchestrator | TASK [Manage labels] *********************************************************** 2025-08-29 17:50:40.718230 | orchestrator | Friday 29 August 2025 17:50:22 +0000 (0:00:01.074) 0:03:52.605 ********* 2025-08-29 17:50:40.718243 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 17:50:40.718251 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 17:50:40.718259 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 17:50:40.718266 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2025-08-29 17:50:40.718274 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-08-29 17:50:40.718286 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-08-29 17:50:40.718294 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 17:50:40.718302 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 17:50:40.718310 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 17:50:40.718318 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 17:50:40.718325 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-08-29 17:50:40.718333 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-08-29 17:50:40.718341 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 17:50:40.718349 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 17:50:40.718357 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-08-29 17:50:40.718364 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 17:50:40.718372 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 17:50:40.718380 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-08-29 17:50:40.718387 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 17:50:40.718395 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-08-29 17:50:40.718403 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 17:50:40.718411 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 17:50:40.718419 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 17:50:40.718427 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 17:50:40.718434 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 17:50:40.718477 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 17:50:40.718486 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 17:50:40.718494 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 17:50:40.718502 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 17:50:40.718510 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 17:50:40.718518 | orchestrator | 2025-08-29 17:50:40.718526 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 17:50:40.718534 | orchestrator | Friday 29 August 2025 17:50:38 +0000 (0:00:16.019) 0:04:08.625 ********* 2025-08-29 17:50:40.718546 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.718554 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.718562 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.718570 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.718577 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.718585 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
17:50:40.718593 | orchestrator | 2025-08-29 17:50:40.718601 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 17:50:40.718609 | orchestrator | Friday 29 August 2025 17:50:38 +0000 (0:00:00.501) 0:04:09.127 ********* 2025-08-29 17:50:40.718617 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:50:40.718625 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:50:40.718632 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:50:40.718640 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:50:40.718648 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:50:40.718656 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:50:40.718663 | orchestrator | 2025-08-29 17:50:40.718671 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:50:40.718680 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:50:40.718689 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 17:50:40.718697 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 17:50:40.718711 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 17:50:40.718719 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 17:50:40.718727 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 17:50:40.718739 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 17:50:40.718747 | orchestrator | 2025-08-29 17:50:40.718755 | orchestrator | 2025-08-29 17:50:40.718763 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 17:50:40.718770 | orchestrator | Friday 29 August 2025 17:50:39 +0000 (0:00:00.598) 0:04:09.726 ********* 2025-08-29 17:50:40.718778 | orchestrator | =============================================================================== 2025-08-29 17:50:40.718786 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.51s 2025-08-29 17:50:40.718794 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.76s 2025-08-29 17:50:40.718803 | orchestrator | kubectl : Install required packages ------------------------------------ 17.51s 2025-08-29 17:50:40.718810 | orchestrator | Manage labels ---------------------------------------------------------- 16.02s 2025-08-29 17:50:40.718818 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.16s 2025-08-29 17:50:40.718826 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.23s 2025-08-29 17:50:40.718834 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.37s 2025-08-29 17:50:40.718841 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.78s 2025-08-29 17:50:40.718849 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 4.72s 2025-08-29 17:50:40.718857 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 4.21s 2025-08-29 17:50:40.718870 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.44s 2025-08-29 17:50:40.718878 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.22s 2025-08-29 17:50:40.718886 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 
2.88s 2025-08-29 17:50:40.718894 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.88s 2025-08-29 17:50:40.718901 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.58s 2025-08-29 17:50:40.718909 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.50s 2025-08-29 17:50:40.718917 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.21s 2025-08-29 17:50:40.718925 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.09s 2025-08-29 17:50:40.718933 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.91s 2025-08-29 17:50:40.718941 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.90s 2025-08-29 17:50:40.718949 | orchestrator | 2025-08-29 17:50:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:43.791833 | orchestrator | 2025-08-29 17:50:43 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:43.791934 | orchestrator | 2025-08-29 17:50:43 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:43.791948 | orchestrator | 2025-08-29 17:50:43 | INFO  | Task 57493341-c269-4e2e-a954-17d9e26da447 is in state STARTED 2025-08-29 17:50:43.791960 | orchestrator | 2025-08-29 17:50:43 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:43.793378 | orchestrator | 2025-08-29 17:50:43 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:43.793970 | orchestrator | 2025-08-29 17:50:43 | INFO  | Task 36a61f6c-a38b-42d0-901b-c8f28392e3ca is in state STARTED 2025-08-29 17:50:43.794202 | orchestrator | 2025-08-29 17:50:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:46.856931 | orchestrator | 2025-08-29 17:50:46 | INFO  | Task 
b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:46.857236 | orchestrator | 2025-08-29 17:50:46 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:46.857929 | orchestrator | 2025-08-29 17:50:46 | INFO  | Task 57493341-c269-4e2e-a954-17d9e26da447 is in state STARTED 2025-08-29 17:50:46.859505 | orchestrator | 2025-08-29 17:50:46 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:46.861426 | orchestrator | 2025-08-29 17:50:46 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:46.863164 | orchestrator | 2025-08-29 17:50:46 | INFO  | Task 36a61f6c-a38b-42d0-901b-c8f28392e3ca is in state STARTED 2025-08-29 17:50:46.863249 | orchestrator | 2025-08-29 17:50:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:50.026338 | orchestrator | 2025-08-29 17:50:50 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:50.043526 | orchestrator | 2025-08-29 17:50:50 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:50.043641 | orchestrator | 2025-08-29 17:50:50 | INFO  | Task 57493341-c269-4e2e-a954-17d9e26da447 is in state STARTED 2025-08-29 17:50:50.047648 | orchestrator | 2025-08-29 17:50:50 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:50.048857 | orchestrator | 2025-08-29 17:50:50 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:50.049567 | orchestrator | 2025-08-29 17:50:50 | INFO  | Task 36a61f6c-a38b-42d0-901b-c8f28392e3ca is in state SUCCESS 2025-08-29 17:50:50.049906 | orchestrator | 2025-08-29 17:50:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:53.114545 | orchestrator | 2025-08-29 17:50:53 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:53.114616 | orchestrator | 2025-08-29 17:50:53 | INFO  | Task 
ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:53.114622 | orchestrator | 2025-08-29 17:50:53 | INFO  | Task 57493341-c269-4e2e-a954-17d9e26da447 is in state STARTED 2025-08-29 17:50:53.114626 | orchestrator | 2025-08-29 17:50:53 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:53.114631 | orchestrator | 2025-08-29 17:50:53 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:53.114635 | orchestrator | 2025-08-29 17:50:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:56.150681 | orchestrator | 2025-08-29 17:50:56 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:56.150794 | orchestrator | 2025-08-29 17:50:56 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:56.152729 | orchestrator | 2025-08-29 17:50:56 | INFO  | Task 57493341-c269-4e2e-a954-17d9e26da447 is in state SUCCESS 2025-08-29 17:50:56.155364 | orchestrator | 2025-08-29 17:50:56 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:56.163900 | orchestrator | 2025-08-29 17:50:56 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:56.163939 | orchestrator | 2025-08-29 17:50:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:50:59.254667 | orchestrator | 2025-08-29 17:50:59 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:50:59.254772 | orchestrator | 2025-08-29 17:50:59 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:50:59.254787 | orchestrator | 2025-08-29 17:50:59 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:50:59.254799 | orchestrator | 2025-08-29 17:50:59 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:50:59.254810 | orchestrator | 2025-08-29 17:50:59 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 17:51:02.270128 | orchestrator | 2025-08-29 17:51:02 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:51:02.270626 | orchestrator | 2025-08-29 17:51:02 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:51:02.271376 | orchestrator | 2025-08-29 17:51:02 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:51:02.273714 | orchestrator | 2025-08-29 17:51:02 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:51:02.273738 | orchestrator | 2025-08-29 17:51:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:51:05.313391 | orchestrator | 2025-08-29 17:51:05 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:51:05.315015 | orchestrator | 2025-08-29 17:51:05 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:51:05.316089 | orchestrator | 2025-08-29 17:51:05 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:51:05.317883 | orchestrator | 2025-08-29 17:51:05 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:51:05.317968 | orchestrator | 2025-08-29 17:51:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:51:08.354277 | orchestrator | 2025-08-29 17:51:08 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:51:08.354370 | orchestrator | 2025-08-29 17:51:08 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:51:08.355081 | orchestrator | 2025-08-29 17:51:08 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:51:08.355843 | orchestrator | 2025-08-29 17:51:08 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:51:08.355878 | orchestrator | 2025-08-29 17:51:08 | INFO  | Wait 1 second(s) until the next check 
2025-08-29 17:51:11.408706 | orchestrator | 2025-08-29 17:51:11 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:51:11.408802 | orchestrator | 2025-08-29 17:51:11 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:51:11.409895 | orchestrator | 2025-08-29 17:51:11 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:51:11.410720 | orchestrator | 2025-08-29 17:51:11 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:51:11.410764 | orchestrator | 2025-08-29 17:51:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:51:14.447696 | orchestrator | 2025-08-29 17:51:14 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state STARTED 2025-08-29 17:51:14.448060 | orchestrator | 2025-08-29 17:51:14 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:51:14.448669 | orchestrator | 2025-08-29 17:51:14 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state STARTED 2025-08-29 17:51:14.449508 | orchestrator | 2025-08-29 17:51:14 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:51:14.449541 | orchestrator | 2025-08-29 17:51:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:51:17.476678 | orchestrator | 2025-08-29 17:51:17 | INFO  | Task b3af539c-a7d3-48c0-b63d-31a4814edffd is in state SUCCESS 2025-08-29 17:51:17.477747 | orchestrator | 2025-08-29 17:51:17.477790 | orchestrator | 2025-08-29 17:51:17.477804 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-08-29 17:51:17.477815 | orchestrator | 2025-08-29 17:51:17.477826 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 17:51:17.477838 | orchestrator | Friday 29 August 2025 17:50:43 +0000 (0:00:00.210) 0:00:00.210 ********* 2025-08-29 17:51:17.477849 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)]
2025-08-29 17:51:17.477860 | orchestrator |
2025-08-29 17:51:17.477872 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-08-29 17:51:17.477883 | orchestrator | Friday 29 August 2025 17:50:44 +0000 (0:00:00.956) 0:00:01.167 *********
2025-08-29 17:51:17.477894 | orchestrator | changed: [testbed-manager]
2025-08-29 17:51:17.478130 | orchestrator |
2025-08-29 17:51:17.478146 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-08-29 17:51:17.478157 | orchestrator | Friday 29 August 2025 17:50:46 +0000 (0:00:01.378) 0:00:02.545 *********
2025-08-29 17:51:17.478168 | orchestrator | changed: [testbed-manager]
2025-08-29 17:51:17.478179 | orchestrator |
2025-08-29 17:51:17.478221 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:51:17.478233 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:51:17.478246 | orchestrator |
2025-08-29 17:51:17.478256 | orchestrator |
2025-08-29 17:51:17.478267 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:51:17.478278 | orchestrator | Friday 29 August 2025 17:50:46 +0000 (0:00:00.698) 0:00:03.243 *********
2025-08-29 17:51:17.478314 | orchestrator | ===============================================================================
2025-08-29 17:51:17.478326 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.38s
2025-08-29 17:51:17.478336 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.96s
2025-08-29 17:51:17.478347 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.70s
2025-08-29 17:51:17.478358 | orchestrator |
2025-08-29 17:51:17.478368 | orchestrator |
2025-08-29 17:51:17.478379 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-08-29 17:51:17.478390 | orchestrator |
2025-08-29 17:51:17.478400 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-08-29 17:51:17.478411 | orchestrator | Friday 29 August 2025 17:50:44 +0000 (0:00:00.161) 0:00:00.161 *********
2025-08-29 17:51:17.478529 | orchestrator | ok: [testbed-manager]
2025-08-29 17:51:17.478546 | orchestrator |
2025-08-29 17:51:17.478556 | orchestrator | TASK [Create .kube directory] **************************************************
2025-08-29 17:51:17.478567 | orchestrator | Friday 29 August 2025 17:50:45 +0000 (0:00:00.846) 0:00:01.007 *********
2025-08-29 17:51:17.478577 | orchestrator | ok: [testbed-manager]
2025-08-29 17:51:17.478587 | orchestrator |
2025-08-29 17:51:17.478598 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-08-29 17:51:17.478608 | orchestrator | Friday 29 August 2025 17:50:46 +0000 (0:00:00.881) 0:00:01.889 *********
2025-08-29 17:51:17.478619 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-08-29 17:51:17.478629 | orchestrator |
2025-08-29 17:51:17.478640 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-08-29 17:51:17.478651 | orchestrator | Friday 29 August 2025 17:50:47 +0000 (0:00:01.069) 0:00:02.759 *********
2025-08-29 17:51:17.478661 | orchestrator | changed: [testbed-manager]
2025-08-29 17:51:17.478672 | orchestrator |
2025-08-29 17:51:17.478682 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-08-29 17:51:17.478693 | orchestrator | Friday 29 August 2025 17:50:48 +0000 (0:00:01.069) 0:00:03.828 *********
2025-08-29 17:51:17.478703 | orchestrator | changed: [testbed-manager]
2025-08-29 17:51:17.478714 | orchestrator |
2025-08-29 17:51:17.478724 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-08-29 17:51:17.478749 | orchestrator | Friday 29 August 2025 17:50:49 +0000 (0:00:00.768) 0:00:04.597 *********
2025-08-29 17:51:17.478760 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 17:51:17.478797 | orchestrator |
2025-08-29 17:51:17.478808 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-08-29 17:51:17.478819 | orchestrator | Friday 29 August 2025 17:50:51 +0000 (0:00:02.347) 0:00:06.944 *********
2025-08-29 17:51:17.478830 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 17:51:17.478840 | orchestrator |
2025-08-29 17:51:17.478851 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-08-29 17:51:17.478861 | orchestrator | Friday 29 August 2025 17:50:52 +0000 (0:00:01.112) 0:00:08.056 *********
2025-08-29 17:51:17.478872 | orchestrator | ok: [testbed-manager]
2025-08-29 17:51:17.478883 | orchestrator |
2025-08-29 17:51:17.478894 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-08-29 17:51:17.478904 | orchestrator | Friday 29 August 2025 17:50:53 +0000 (0:00:00.419) 0:00:08.476 *********
2025-08-29 17:51:17.478915 | orchestrator | ok: [testbed-manager]
2025-08-29 17:51:17.478926 | orchestrator |
2025-08-29 17:51:17.478936 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:51:17.478947 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 17:51:17.478958 | orchestrator |
2025-08-29 17:51:17.478984 | orchestrator |
2025-08-29 17:51:17.478995 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:51:17.479006 | orchestrator | Friday 29 August 2025 17:50:53 +0000 (0:00:00.338) 0:00:08.815 *********
2025-08-29 17:51:17.479027 | orchestrator | ===============================================================================
2025-08-29 17:51:17.479038 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.35s
2025-08-29 17:51:17.479048 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.11s
2025-08-29 17:51:17.479059 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.07s
2025-08-29 17:51:17.479158 | orchestrator | Create .kube directory -------------------------------------------------- 0.88s
2025-08-29 17:51:17.479174 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.87s
2025-08-29 17:51:17.479185 | orchestrator | Get home directory of operator user ------------------------------------- 0.85s
2025-08-29 17:51:17.479196 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.77s
2025-08-29 17:51:17.479206 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s
2025-08-29 17:51:17.479217 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s
2025-08-29 17:51:17.479228 | orchestrator |
2025-08-29 17:51:17.479238 | orchestrator |
2025-08-29 17:51:17.479249 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:51:17.479260 | orchestrator |
2025-08-29 17:51:17.479270 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 17:51:17.479281 | orchestrator | Friday 29 August 2025 17:49:52 +0000 (0:00:00.311) 0:00:00.311 *********
2025-08-29 17:51:17.479291 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:51:17.479302 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:51:17.479313 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:51:17.479323 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:51:17.479334 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:51:17.479344 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:51:17.479355 | orchestrator |
2025-08-29 17:51:17.479366 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:51:17.479377 | orchestrator | Friday 29 August 2025 17:49:53 +0000 (0:00:01.587) 0:00:01.899 *********
2025-08-29 17:51:17.479387 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 17:51:17.479398 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 17:51:17.479409 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 17:51:17.479420 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 17:51:17.479464 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 17:51:17.479477 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-08-29 17:51:17.479487 | orchestrator |
2025-08-29 17:51:17.479498 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-08-29 17:51:17.479509 | orchestrator |
2025-08-29 17:51:17.479519 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-08-29 17:51:17.479530 | orchestrator | Friday 29 August 2025 17:49:56 +0000 (0:00:02.283) 0:00:04.183 *********
2025-08-29 17:51:17.479542 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:51:17.479553 | orchestrator |
2025-08-29 17:51:17.479564 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 17:51:17.479575 | orchestrator | Friday 29 August 2025 17:49:58 +0000 (0:00:02.258) 0:00:06.442 *********
2025-08-29 17:51:17.479585 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-08-29 17:51:17.479596 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-08-29 17:51:17.479607 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-08-29 17:51:17.479617 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-08-29 17:51:17.479628 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-08-29 17:51:17.479648 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-08-29 17:51:17.479659 | orchestrator |
2025-08-29 17:51:17.479670 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 17:51:17.479680 | orchestrator | Friday 29 August 2025 17:50:00 +0000 (0:00:01.846) 0:00:08.288 *********
2025-08-29 17:51:17.479772 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-08-29 17:51:17.479790 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-08-29 17:51:17.479800 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-08-29 17:51:17.479811 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-08-29 17:51:17.479821 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-08-29 17:51:17.479832 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-08-29 17:51:17.479842 | orchestrator |
2025-08-29 17:51:17.479853 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 17:51:17.479864 | orchestrator | Friday 29 August 2025 17:50:02 +0000 (0:00:02.008) 0:00:10.296 *********
2025-08-29 17:51:17.479944 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-08-29 17:51:17.479956 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:51:17.479967 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-08-29 17:51:17.479978 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:51:17.479989 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-08-29 17:51:17.479999 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:51:17.480010 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-08-29 17:51:17.480021 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:51:17.480031 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-08-29 17:51:17.480042 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:51:17.480053 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-08-29 17:51:17.480063 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:51:17.480074 | orchestrator |
2025-08-29 17:51:17.480085 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-08-29 17:51:17.480096 | orchestrator | Friday 29 August 2025 17:50:05 +0000 (0:00:03.803) 0:00:14.100 *********
2025-08-29 17:51:17.480107 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:51:17.480117 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:51:17.480128 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:51:17.480150 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:51:17.480161 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:51:17.480172 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:51:17.480182 | orchestrator |
2025-08-29 17:51:17.480193 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-08-29 17:51:17.480204 | orchestrator | Friday 29 August 2025 17:50:07 +0000 (0:00:01.985) 0:00:16.085 *********
2025-08-29 17:51:17.480218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480409 | orchestrator |
2025-08-29 17:51:17.480420 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-08-29 17:51:17.480487 | orchestrator | Friday 29 August 2025 17:50:11 +0000 (0:00:03.284) 0:00:19.369 *********
2025-08-29 17:51:17.480501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.480668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.480727 | orchestrator |
2025-08-29 17:51:17.480739 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-08-29 17:51:17.480750 | orchestrator | Friday 29 August 2025 17:50:16 +0000 (0:00:02.086) 0:00:25.044 *********
2025-08-29 17:51:17.480761 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:51:17.480772 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:51:17.480782 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:51:17.480793 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:51:17.480803 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:51:17.480813 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:51:17.480825 | orchestrator |
2025-08-29 17:51:17.480835 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-08-29 17:51:17.480846 | orchestrator | Friday 29 August 2025 17:50:18 +0000 (0:00:02.086) 0:00:27.131 *********
2025-08-29 17:51:17.480858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.481511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.481531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.481556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.481575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.481585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 17:51:17.481595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.481605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-08-29 17:51:17.481621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-08-29 17:51:17.481644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 17:51:17.481655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 17:51:17.481665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 17:51:17.481674 | orchestrator | 2025-08-29 17:51:17.481684 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 17:51:17.481694 | orchestrator | Friday 29 August 2025 17:50:23 +0000 (0:00:05.016) 0:00:32.147 ********* 2025-08-29 17:51:17.481703 | orchestrator | 2025-08-29 17:51:17.481713 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 17:51:17.481722 | orchestrator | Friday 29 August 2025 17:50:24 +0000 (0:00:00.289) 0:00:32.437 ********* 2025-08-29 17:51:17.481732 | orchestrator | 2025-08-29 17:51:17.481741 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 17:51:17.481751 | orchestrator | Friday 29 August 2025 17:50:24 +0000 (0:00:00.168) 0:00:32.606 ********* 2025-08-29 17:51:17.481760 | orchestrator | 2025-08-29 17:51:17.481770 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 17:51:17.481779 | orchestrator | Friday 29 August 2025 17:50:24 +0000 (0:00:00.166) 0:00:32.772 ********* 2025-08-29 17:51:17.481788 | orchestrator | 2025-08-29 17:51:17.481798 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 17:51:17.481807 | orchestrator | Friday 29 August 2025 17:50:24 +0000 (0:00:00.242) 0:00:33.014 ********* 2025-08-29 17:51:17.481816 | orchestrator | 2025-08-29 17:51:17.481826 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-08-29 17:51:17.481835 | orchestrator | Friday 29 August 2025 17:50:25 +0000 (0:00:00.402) 0:00:33.417 ********* 2025-08-29 
17:51:17.481844 | orchestrator | 2025-08-29 17:51:17.481854 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-08-29 17:51:17.481863 | orchestrator | Friday 29 August 2025 17:50:25 +0000 (0:00:00.322) 0:00:33.740 ********* 2025-08-29 17:51:17.481879 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:51:17.481889 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:51:17.481899 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:51:17.481908 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:51:17.481917 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:51:17.481927 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:51:17.481936 | orchestrator | 2025-08-29 17:51:17.481946 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-08-29 17:51:17.481955 | orchestrator | Friday 29 August 2025 17:50:38 +0000 (0:00:13.407) 0:00:47.148 ********* 2025-08-29 17:51:17.481965 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:51:17.481975 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:51:17.481984 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:51:17.481993 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:51:17.482002 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:51:17.482012 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:51:17.482076 | orchestrator | 2025-08-29 17:51:17.482086 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 17:51:17.482102 | orchestrator | Friday 29 August 2025 17:50:40 +0000 (0:00:01.407) 0:00:48.555 ********* 2025-08-29 17:51:17.482113 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:51:17.482123 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:51:17.482133 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:51:17.482142 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:51:17.482151 | orchestrator | changed: 
[testbed-node-1] 2025-08-29 17:51:17.482161 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:51:17.482170 | orchestrator | 2025-08-29 17:51:17.482180 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-08-29 17:51:17.482194 | orchestrator | Friday 29 August 2025 17:50:51 +0000 (0:00:10.948) 0:00:59.504 ********* 2025-08-29 17:51:17.482204 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-08-29 17:51:17.482214 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-08-29 17:51:17.482224 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-08-29 17:51:17.482234 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-08-29 17:51:17.482244 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-08-29 17:51:17.482253 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-08-29 17:51:17.482263 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-08-29 17:51:17.482272 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-08-29 17:51:17.482282 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-08-29 17:51:17.482292 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-08-29 17:51:17.482301 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 
'hostname', 'value': 'testbed-node-1'}) 2025-08-29 17:51:17.482311 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-08-29 17:51:17.482321 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 17:51:17.482331 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 17:51:17.482340 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 17:51:17.482357 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 17:51:17.482366 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 17:51:17.482376 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-08-29 17:51:17.482386 | orchestrator | 2025-08-29 17:51:17.482396 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-08-29 17:51:17.482405 | orchestrator | Friday 29 August 2025 17:50:59 +0000 (0:00:08.065) 0:01:07.569 ********* 2025-08-29 17:51:17.482415 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-08-29 17:51:17.482424 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:51:17.482487 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-08-29 17:51:17.482499 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:51:17.482509 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-08-29 17:51:17.482518 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:51:17.482528 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-08-29 17:51:17.482537 | 
orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-08-29 17:51:17.482547 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-08-29 17:51:17.482556 | orchestrator | 2025-08-29 17:51:17.482566 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-08-29 17:51:17.482576 | orchestrator | Friday 29 August 2025 17:51:02 +0000 (0:00:02.696) 0:01:10.265 ********* 2025-08-29 17:51:17.482585 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-08-29 17:51:17.482593 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:51:17.482601 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-08-29 17:51:17.482609 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:51:17.482617 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-08-29 17:51:17.482625 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:51:17.482633 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-08-29 17:51:17.482640 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-08-29 17:51:17.482648 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-08-29 17:51:17.482656 | orchestrator | 2025-08-29 17:51:17.482664 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-08-29 17:51:17.482672 | orchestrator | Friday 29 August 2025 17:51:05 +0000 (0:00:03.707) 0:01:13.973 ********* 2025-08-29 17:51:17.482680 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:51:17.482687 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:51:17.482701 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:51:17.482709 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:51:17.482717 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:51:17.482725 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:51:17.482733 | orchestrator | 
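The "Set system-id, hostname and hw-offload" task above loops over items of the form `{'col': ..., 'name': ..., 'value': ..., 'state': ...}` and applies them to the Open_vSwitch table, removing the key when `state` is `absent`. A minimal sketch of how such items map to `ovs-vsctl` invocations, assuming the standard `set`/`remove` subcommands against the `Open_vSwitch` table — this is an illustration of the pattern, not kolla-ansible's actual module code:

```python
# Hedged sketch: translate the loop items from the log above into the
# ovs-vsctl argument lists they imply. The item layout mirrors the log;
# the command construction is an assumption for illustration only.

def ovs_vsctl_args(item, table="Open_vSwitch", record="."):
    """Return the ovs-vsctl argument list for one loop item."""
    col, name, value = item["col"], item["name"], item["value"]
    if item.get("state") == "absent":
        # hw-offload carries state=absent in the log, so the key is removed.
        return ["ovs-vsctl", "remove", table, record, col, name]
    return ["ovs-vsctl", "set", table, record, f"{col}:{name}={value}"]

items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]

for item in items:
    print(" ".join(ovs_vsctl_args(item)))
```

The `ok:` (rather than `changed:`) results for the `hw-offload` items are consistent with the key already being absent, so the removal is a no-op.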
2025-08-29 17:51:17.482740 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:51:17.482753 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:51:17.482763 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:51:17.482771 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 17:51:17.482779 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:51:17.482787 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:51:17.482801 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 17:51:17.482809 | orchestrator | 2025-08-29 17:51:17.482817 | orchestrator | 2025-08-29 17:51:17.482825 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:51:17.482832 | orchestrator | Friday 29 August 2025 17:51:14 +0000 (0:00:08.503) 0:01:22.476 ********* 2025-08-29 17:51:17.482840 | orchestrator | =============================================================================== 2025-08-29 17:51:17.482848 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.45s 2025-08-29 17:51:17.482856 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 13.41s 2025-08-29 17:51:17.482864 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.07s 2025-08-29 17:51:17.482871 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.67s 2025-08-29 17:51:17.482879 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 5.02s 
2025-08-29 17:51:17.482887 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.80s 2025-08-29 17:51:17.482895 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.71s 2025-08-29 17:51:17.482903 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.28s 2025-08-29 17:51:17.482910 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.70s 2025-08-29 17:51:17.482918 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.28s 2025-08-29 17:51:17.482926 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.26s 2025-08-29 17:51:17.482934 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.09s 2025-08-29 17:51:17.482942 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.01s 2025-08-29 17:51:17.482949 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.99s 2025-08-29 17:51:17.482957 | orchestrator | module-load : Load modules ---------------------------------------------- 1.85s 2025-08-29 17:51:17.482965 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.59s 2025-08-29 17:51:17.482973 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.59s 2025-08-29 17:51:17.482980 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.41s 2025-08-29 17:51:17.482988 | orchestrator | 2025-08-29 17:51:17 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:51:17.482996 | orchestrator | 2025-08-29 17:51:17 | INFO  | Task 71e47993-3ee0-4dac-963b-c47e32191306 is in state STARTED 2025-08-29 17:51:17.483004 | orchestrator | 2025-08-29 17:51:17 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd 
is in state STARTED 2025-08-29 17:51:17.483012 | orchestrator | 2025-08-29 17:51:17 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:51:17.483020 | orchestrator | 2025-08-29 17:51:17 | INFO  | Wait 1 second(s) until the next check
42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:52:49.025926 | orchestrator | 2025-08-29 17:52:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:52:52.105093 | orchestrator | 2025-08-29 17:52:52 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:52:52.106289 | orchestrator | 2025-08-29 17:52:52 | INFO  | Task 71e47993-3ee0-4dac-963b-c47e32191306 is in state STARTED 2025-08-29 17:52:52.108004 | orchestrator | 2025-08-29 17:52:52 | INFO  | Task 537b123d-e6a5-4a4b-a875-46902d6706bd is in state SUCCESS 2025-08-29 17:52:52.109062 | orchestrator | 2025-08-29 17:52:52.109097 | orchestrator | 2025-08-29 17:52:52.109108 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-08-29 17:52:52.109120 | orchestrator | 2025-08-29 17:52:52.109131 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 17:52:52.109142 | orchestrator | Friday 29 August 2025 17:50:25 +0000 (0:00:00.349) 0:00:00.349 ********* 2025-08-29 17:52:52.109152 | orchestrator | ok: [localhost] => { 2025-08-29 17:52:52.109164 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-08-29 17:52:52.109176 | orchestrator | } 2025-08-29 17:52:52.109187 | orchestrator | 2025-08-29 17:52:52.109197 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-08-29 17:52:52.109208 | orchestrator | Friday 29 August 2025 17:50:26 +0000 (0:00:00.186) 0:00:00.535 ********* 2025-08-29 17:52:52.109373 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-08-29 17:52:52.109415 | orchestrator | ...ignoring 2025-08-29 17:52:52.109427 | orchestrator | 2025-08-29 17:52:52.109438 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-08-29 17:52:52.109449 | orchestrator | Friday 29 August 2025 17:50:30 +0000 (0:00:04.246) 0:00:04.782 ********* 2025-08-29 17:52:52.109460 | orchestrator | skipping: [localhost] 2025-08-29 17:52:52.109470 | orchestrator | 2025-08-29 17:52:52.109482 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-08-29 17:52:52.109493 | orchestrator | Friday 29 August 2025 17:50:30 +0000 (0:00:00.151) 0:00:04.934 ********* 2025-08-29 17:52:52.109504 | orchestrator | ok: [localhost] 2025-08-29 17:52:52.109514 | orchestrator | 2025-08-29 17:52:52.109525 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:52:52.109536 | orchestrator | 2025-08-29 17:52:52.109546 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:52:52.109557 | orchestrator | Friday 29 August 2025 17:50:30 +0000 (0:00:00.288) 0:00:05.222 ********* 2025-08-29 17:52:52.109567 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:52:52.109578 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:52:52.109588 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:52:52.109599 | orchestrator | 2025-08-29 17:52:52.109610 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:52:52.109620 | orchestrator | Friday 29 August 2025 17:50:31 +0000 (0:00:00.493) 0:00:05.716 ********* 2025-08-29 17:52:52.109631 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-08-29 17:52:52.109642 | orchestrator | ok: [testbed-node-2] => 
(item=enable_rabbitmq_True) 2025-08-29 17:52:52.109653 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-08-29 17:52:52.109663 | orchestrator | 2025-08-29 17:52:52.109674 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-08-29 17:52:52.109685 | orchestrator | 2025-08-29 17:52:52.109695 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 17:52:52.109706 | orchestrator | Friday 29 August 2025 17:50:32 +0000 (0:00:00.973) 0:00:06.690 ********* 2025-08-29 17:52:52.109738 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:52:52.109750 | orchestrator | 2025-08-29 17:52:52.109760 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 17:52:52.109771 | orchestrator | Friday 29 August 2025 17:50:32 +0000 (0:00:00.535) 0:00:07.226 ********* 2025-08-29 17:52:52.109781 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:52:52.109792 | orchestrator | 2025-08-29 17:52:52.109802 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-08-29 17:52:52.109813 | orchestrator | Friday 29 August 2025 17:50:34 +0000 (0:00:01.413) 0:00:08.639 ********* 2025-08-29 17:52:52.109823 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.109834 | orchestrator | 2025-08-29 17:52:52.109845 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-08-29 17:52:52.109855 | orchestrator | Friday 29 August 2025 17:50:34 +0000 (0:00:00.366) 0:00:09.006 ********* 2025-08-29 17:52:52.109866 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.109876 | orchestrator | 2025-08-29 17:52:52.109886 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-08-29 17:52:52.109897 | 
orchestrator | Friday 29 August 2025 17:50:35 +0000 (0:00:00.693) 0:00:09.700 ********* 2025-08-29 17:52:52.109907 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.109918 | orchestrator | 2025-08-29 17:52:52.109928 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-08-29 17:52:52.109938 | orchestrator | Friday 29 August 2025 17:50:35 +0000 (0:00:00.433) 0:00:10.133 ********* 2025-08-29 17:52:52.109949 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.109959 | orchestrator | 2025-08-29 17:52:52.109970 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 17:52:52.109980 | orchestrator | Friday 29 August 2025 17:50:36 +0000 (0:00:00.458) 0:00:10.592 ********* 2025-08-29 17:52:52.109991 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:52:52.110002 | orchestrator | 2025-08-29 17:52:52.110053 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-08-29 17:52:52.110069 | orchestrator | Friday 29 August 2025 17:50:37 +0000 (0:00:00.952) 0:00:11.545 ********* 2025-08-29 17:52:52.110081 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:52:52.110093 | orchestrator | 2025-08-29 17:52:52.110105 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-08-29 17:52:52.110128 | orchestrator | Friday 29 August 2025 17:50:37 +0000 (0:00:00.909) 0:00:12.455 ********* 2025-08-29 17:52:52.110140 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.110152 | orchestrator | 2025-08-29 17:52:52.110164 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-08-29 17:52:52.110176 | orchestrator | Friday 29 August 2025 17:50:38 +0000 (0:00:00.343) 0:00:12.799 ********* 2025-08-29 17:52:52.110188 | orchestrator | 
skipping: [testbed-node-0] 2025-08-29 17:52:52.110200 | orchestrator | 2025-08-29 17:52:52.110221 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-08-29 17:52:52.110233 | orchestrator | Friday 29 August 2025 17:50:38 +0000 (0:00:00.412) 0:00:13.211 ********* 2025-08-29 17:52:52.110252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:52:52.110276 | orchestrator | changed: [testbed-node-1] => (item=<same rabbitmq service definition as for testbed-node-0; duplicate elided>) 2025-08-29 17:52:52.110289 | orchestrator | changed: [testbed-node-2] => (item=<same rabbitmq service definition; duplicate elided>) 2025-08-29 17:52:52.110301 | orchestrator | 2025-08-29 17:52:52.110312 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-08-29 17:52:52.110323 | orchestrator | Friday 29 August 2025 17:50:39 +0000 (0:00:01.131) 0:00:14.342 ********* 2025-08-29 17:52:52.110352 | orchestrator | changed: [testbed-node-0] => (item=<same rabbitmq service definition; duplicate elided>) 2025-08-29 17:52:52.110365 | orchestrator | changed: [testbed-node-1] => (item=<same rabbitmq service definition; duplicate elided>) 2025-08-29 17:52:52.110384 | orchestrator | changed: [testbed-node-2] => (item=<same rabbitmq service definition; duplicate elided>) 2025-08-29 17:52:52.110411 | orchestrator | 2025-08-29 17:52:52.110423 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-08-29 17:52:52.110433 | orchestrator | Friday 29 August 2025 17:50:45 +0000 (0:00:05.249) 0:00:19.592 ********* 2025-08-29 17:52:52.110444 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 17:52:52.110455 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 17:52:52.110483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 17:52:52.110495 | 
orchestrator | 2025-08-29 17:52:52.110505 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-08-29 17:52:52.110516 | orchestrator | Friday 29 August 2025 17:50:47 +0000 (0:00:02.766) 0:00:22.359 ********* 2025-08-29 17:52:52.110527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 17:52:52.110537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 17:52:52.110547 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 17:52:52.110558 | orchestrator | 2025-08-29 17:52:52.110569 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-08-29 17:52:52.110580 | orchestrator | Friday 29 August 2025 17:50:50 +0000 (0:00:02.904) 0:00:25.263 ********* 2025-08-29 17:52:52.110596 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 17:52:52.110606 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 17:52:52.110617 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 17:52:52.110628 | orchestrator | 2025-08-29 17:52:52.110645 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-08-29 17:52:52.110663 | orchestrator | Friday 29 August 2025 17:50:53 +0000 (0:00:02.779) 0:00:28.042 ********* 2025-08-29 17:52:52.110681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 17:52:52.110699 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 17:52:52.110717 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 17:52:52.110747 | orchestrator | 2025-08-29 17:52:52.110766 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-08-29 17:52:52.110784 | orchestrator | Friday 29 August 2025 17:50:55 +0000 (0:00:02.227) 0:00:30.270 ********* 2025-08-29 17:52:52.110802 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 17:52:52.110820 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 17:52:52.110836 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 17:52:52.110853 | orchestrator | 2025-08-29 17:52:52.110871 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-08-29 17:52:52.110889 | orchestrator | Friday 29 August 2025 17:50:58 +0000 (0:00:02.342) 0:00:32.612 ********* 2025-08-29 17:52:52.110907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 17:52:52.110926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 17:52:52.110944 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 17:52:52.110964 | orchestrator | 2025-08-29 17:52:52.110982 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 17:52:52.111000 | orchestrator | Friday 29 August 2025 17:51:00 +0000 (0:00:02.298) 0:00:34.910 ********* 2025-08-29 17:52:52.111020 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.111040 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:52:52.111059 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:52:52.111078 | orchestrator | 2025-08-29 
17:52:52.111096 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-08-29 17:52:52.111114 | orchestrator | Friday 29 August 2025 17:51:00 +0000 (0:00:00.418) 0:00:35.329 ********* 2025-08-29 17:52:52.111135 | orchestrator | changed: [testbed-node-0] => (item=<same rabbitmq service definition as printed under "Ensuring config directories exist"; duplicate elided>) 2025-08-29 17:52:52.111168 | orchestrator | changed: [testbed-node-1] => (item=<same rabbitmq service definition; duplicate elided>) 2025-08-29 17:52:52.111219 | orchestrator | changed: [testbed-node-2] => (item=<same rabbitmq service definition; duplicate elided>) 2025-08-29 17:52:52.111242 | orchestrator | 2025-08-29 17:52:52.111260 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-08-29 17:52:52.111279 | orchestrator | Friday 29 August 2025 17:51:02 +0000 (0:00:01.676) 0:00:37.005 ********* 2025-08-29 17:52:52.111298 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:52:52.111317 | orchestrator | changed: [testbed-node-1] 
2025-08-29 17:52:52.111335 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:52:52.111354 | orchestrator | 2025-08-29 17:52:52.111372 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-08-29 17:52:52.111387 | orchestrator | Friday 29 August 2025 17:51:03 +0000 (0:00:01.044) 0:00:38.049 ********* 2025-08-29 17:52:52.111443 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:52:52.111455 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:52:52.111466 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:52:52.111476 | orchestrator | 2025-08-29 17:52:52.111487 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-08-29 17:52:52.111498 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:07.916) 0:00:45.966 ********* 2025-08-29 17:52:52.111509 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:52:52.111520 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:52:52.111531 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:52:52.111541 | orchestrator | 2025-08-29 17:52:52.111552 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 17:52:52.111563 | orchestrator | 2025-08-29 17:52:52.111574 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 17:52:52.111585 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.543) 0:00:46.509 ********* 2025-08-29 17:52:52.111596 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:52:52.111606 | orchestrator | 2025-08-29 17:52:52.111617 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 17:52:52.111628 | orchestrator | Friday 29 August 2025 17:51:12 +0000 (0:00:00.608) 0:00:47.118 ********* 2025-08-29 17:52:52.111638 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:52:52.111649 | orchestrator | 2025-08-29 
17:52:52.111660 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 17:52:52.111670 | orchestrator | Friday 29 August 2025 17:51:13 +0000 (0:00:00.442) 0:00:47.560 ********* 2025-08-29 17:52:52.111681 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:52:52.111701 | orchestrator | 2025-08-29 17:52:52.111712 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 17:52:52.111723 | orchestrator | Friday 29 August 2025 17:51:14 +0000 (0:00:01.641) 0:00:49.202 ********* 2025-08-29 17:52:52.111734 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:52:52.111744 | orchestrator | 2025-08-29 17:52:52.111765 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 17:52:52.111785 | orchestrator | 2025-08-29 17:52:52.111804 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 17:52:52.111824 | orchestrator | Friday 29 August 2025 17:52:07 +0000 (0:00:52.645) 0:01:41.848 ********* 2025-08-29 17:52:52.111844 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:52:52.111865 | orchestrator | 2025-08-29 17:52:52.111886 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 17:52:52.111928 | orchestrator | Friday 29 August 2025 17:52:07 +0000 (0:00:00.603) 0:01:42.451 ********* 2025-08-29 17:52:52.111948 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:52:52.111966 | orchestrator | 2025-08-29 17:52:52.111985 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 17:52:52.112003 | orchestrator | Friday 29 August 2025 17:52:08 +0000 (0:00:00.478) 0:01:42.930 ********* 2025-08-29 17:52:52.112022 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:52:52.112056 | orchestrator | 2025-08-29 17:52:52.112077 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2025-08-29 17:52:52.112096 | orchestrator | Friday 29 August 2025 17:52:10 +0000 (0:00:01.932) 0:01:44.863 ********* 2025-08-29 17:52:52.112115 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:52:52.112134 | orchestrator | 2025-08-29 17:52:52.112152 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 17:52:52.112171 | orchestrator | 2025-08-29 17:52:52.112189 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 17:52:52.112207 | orchestrator | Friday 29 August 2025 17:52:26 +0000 (0:00:16.486) 0:02:01.350 ********* 2025-08-29 17:52:52.112226 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:52:52.112245 | orchestrator | 2025-08-29 17:52:52.112264 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 17:52:52.112284 | orchestrator | Friday 29 August 2025 17:52:27 +0000 (0:00:00.622) 0:02:01.972 ********* 2025-08-29 17:52:52.112303 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:52:52.112323 | orchestrator | 2025-08-29 17:52:52.112343 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 17:52:52.112375 | orchestrator | Friday 29 August 2025 17:52:27 +0000 (0:00:00.297) 0:02:02.270 ********* 2025-08-29 17:52:52.112457 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:52:52.112479 | orchestrator | 2025-08-29 17:52:52.112499 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 17:52:52.112517 | orchestrator | Friday 29 August 2025 17:52:30 +0000 (0:00:02.624) 0:02:04.895 ********* 2025-08-29 17:52:52.112533 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:52:52.112544 | orchestrator | 2025-08-29 17:52:52.112555 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2025-08-29 17:52:52.112565 | orchestrator | 2025-08-29 17:52:52.112576 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-08-29 17:52:52.112666 | orchestrator | Friday 29 August 2025 17:52:46 +0000 (0:00:15.647) 0:02:20.543 ********* 2025-08-29 17:52:52.112691 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:52:52.112701 | orchestrator | 2025-08-29 17:52:52.112711 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-08-29 17:52:52.112720 | orchestrator | Friday 29 August 2025 17:52:46 +0000 (0:00:00.744) 0:02:21.287 ********* 2025-08-29 17:52:52.112730 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 17:52:52.112739 | orchestrator | enable_outward_rabbitmq_True 2025-08-29 17:52:52.112748 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 17:52:52.112767 | orchestrator | outward_rabbitmq_restart 2025-08-29 17:52:52.112777 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:52:52.112786 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:52:52.112795 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:52:52.112805 | orchestrator | 2025-08-29 17:52:52.112814 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-08-29 17:52:52.112823 | orchestrator | skipping: no hosts matched 2025-08-29 17:52:52.112833 | orchestrator | 2025-08-29 17:52:52.112842 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-08-29 17:52:52.112851 | orchestrator | skipping: no hosts matched 2025-08-29 17:52:52.112860 | orchestrator | 2025-08-29 17:52:52.112870 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-08-29 17:52:52.112879 | orchestrator | skipping: no hosts matched 
2025-08-29 17:52:52.112888 | orchestrator | 2025-08-29 17:52:52.112898 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:52:52.112908 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 17:52:52.112917 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 17:52:52.112931 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:52:52.112948 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 17:52:52.112964 | orchestrator | 2025-08-29 17:52:52.112980 | orchestrator | 2025-08-29 17:52:52.112996 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:52:52.113011 | orchestrator | Friday 29 August 2025 17:52:49 +0000 (0:00:02.300) 0:02:23.588 ********* 2025-08-29 17:52:52.113024 | orchestrator | =============================================================================== 2025-08-29 17:52:52.113041 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.78s 2025-08-29 17:52:52.113057 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.92s 2025-08-29 17:52:52.113074 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.20s 2025-08-29 17:52:52.113092 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 5.25s 2025-08-29 17:52:52.113108 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.25s 2025-08-29 17:52:52.113122 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.90s 2025-08-29 17:52:52.113132 | orchestrator | rabbitmq : Copying over erl_inetrc 
-------------------------------------- 2.78s 2025-08-29 17:52:52.113141 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.77s 2025-08-29 17:52:52.113150 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.34s 2025-08-29 17:52:52.113159 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.30s 2025-08-29 17:52:52.113169 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.30s 2025-08-29 17:52:52.113178 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.23s 2025-08-29 17:52:52.113187 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.83s 2025-08-29 17:52:52.113196 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.68s 2025-08-29 17:52:52.113206 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.41s 2025-08-29 17:52:52.113215 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.22s 2025-08-29 17:52:52.113224 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.13s 2025-08-29 17:52:52.113239 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.04s 2025-08-29 17:52:52.113255 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2025-08-29 17:52:52.113264 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.95s 2025-08-29 17:52:52.113283 | orchestrator | 2025-08-29 17:52:52 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:52:52.113293 | orchestrator | 2025-08-29 17:52:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:52:55.144714 | orchestrator | 2025-08-29 17:52:55 | INFO  | Task 
ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:52:55.145308 | orchestrator | 2025-08-29 17:52:55 | INFO  | Task 71e47993-3ee0-4dac-963b-c47e32191306 is in state STARTED 2025-08-29 17:52:55.147564 | orchestrator | 2025-08-29 17:52:55 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED 2025-08-29 17:52:55.147626 | orchestrator | 2025-08-29 17:52:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:53:56.166324 | orchestrator | 2025-08-29 17:53:56 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED 2025-08-29 17:53:56.168339 | orchestrator | 2025-08-29 17:53:56.168416 | orchestrator | 2025-08-29 17:53:56 | INFO  | Task 71e47993-3ee0-4dac-963b-c47e32191306 is in state SUCCESS 2025-08-29 17:53:56.171349 | orchestrator | 2025-08-29 17:53:56.171421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 17:53:56.171434 | orchestrator | 2025-08-29 17:53:56.171446 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 17:53:56.171458 | orchestrator | Friday 29 August 2025 17:51:18 +0000 (0:00:00.174) 0:00:00.175 ********* 2025-08-29 17:53:56.171470 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:53:56.171482 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:53:56.171493 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:53:56.171503 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.171514 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.171524 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.171535 | orchestrator | 2025-08-29 17:53:56.171546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 17:53:56.171557 | orchestrator | Friday 29 August 2025 17:51:20 +0000 (0:00:01.116) 0:00:01.291 ********* 2025-08-29 17:53:56.171568 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-08-29 17:53:56.171580 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-08-29 17:53:56.171590 | 
orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-08-29 17:53:56.171601 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-08-29 17:53:56.171612 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-08-29 17:53:56.171623 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-08-29 17:53:56.171633 | orchestrator | 2025-08-29 17:53:56.171644 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-08-29 17:53:56.171655 | orchestrator | 2025-08-29 17:53:56.171666 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-08-29 17:53:56.171677 | orchestrator | Friday 29 August 2025 17:51:20 +0000 (0:00:00.880) 0:00:02.172 ********* 2025-08-29 17:53:56.171689 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:53:56.171702 | orchestrator | 2025-08-29 17:53:56.171712 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-08-29 17:53:56.171723 | orchestrator | Friday 29 August 2025 17:51:22 +0000 (0:00:01.108) 0:00:03.280 ********* 2025-08-29 17:53:56.171736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171844 | orchestrator | 2025-08-29 17:53:56.171855 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-08-29 17:53:56.171866 | orchestrator | Friday 29 August 2025 17:51:23 +0000 (0:00:01.435) 0:00:04.715 ********* 2025-08-29 17:53:56.171877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171911 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.171959 | orchestrator | 2025-08-29 17:53:56.171972 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-08-29 17:53:56.171984 | orchestrator | Friday 29 August 2025 17:51:25 +0000 (0:00:02.095) 0:00:06.811 ********* 2025-08-29 17:53:56.171997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172079 | orchestrator | 2025-08-29 17:53:56.172089 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-08-29 17:53:56.172100 | orchestrator | Friday 29 August 2025 17:51:26 +0000 (0:00:01.248) 0:00:08.060 ********* 2025-08-29 17:53:56.172111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172198 | orchestrator | 2025-08-29 17:53:56.172209 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-08-29 17:53:56.172220 | orchestrator | Friday 29 August 2025 17:51:28 +0000 (0:00:02.119) 0:00:10.179 ********* 
2025-08-29 17:53:56.172231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172287 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.172310 | orchestrator | 2025-08-29 17:53:56.172321 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-08-29 17:53:56.172332 | orchestrator | Friday 29 August 2025 17:51:30 +0000 (0:00:01.683) 0:00:11.863 ********* 2025-08-29 17:53:56.172342 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:53:56.172353 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:53:56.172379 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:53:56.172391 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.172403 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.172421 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.172440 | orchestrator | 2025-08-29 17:53:56.172459 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-08-29 17:53:56.172481 | orchestrator | Friday 29 August 2025 17:51:33 +0000 (0:00:02.531) 0:00:14.394 ********* 2025-08-29 17:53:56.172511 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
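The "Configure OVN in OVSDB" task above applies per-chassis `external_ids` keys (`ovn-encap-ip`, `ovn-encap-type`, `ovn-remote`, probe intervals, bridge/MAC mappings) to each node's local Open vSwitch database. As a minimal sketch, the loop items shown in the log translate into `ovs-vsctl` invocations roughly like the following; the item structures are taken from the log, while the exact command form is an illustrative equivalent of what kolla-ansible's OVS handling does, not its actual module code:

```python
# Map the per-node loop items from the log to illustrative ovs-vsctl commands.
# Assumption: "state": "absent" means the key is removed from external_ids,
# anything else is set; booleans are lowercased as OVSDB expects.

items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.13"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-remote",
     "value": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"},
    {"name": "ovn-remote-probe-interval", "value": "60000"},
    {"name": "ovn-monitor-all", "value": False},
    {"name": "ovn-bridge-mappings", "value": "physnet1:br-ex", "state": "absent"},
]

def to_command(item: dict) -> str:
    if item.get("state", "present") == "absent":
        return f"ovs-vsctl remove Open_vSwitch . external_ids {item['name']}"
    value = item["value"]
    if isinstance(value, bool):
        value = str(value).lower()
    return f"ovs-vsctl set Open_vSwitch . external_ids:{item['name']}={value}"

commands = [to_command(i) for i in items]
for c in commands:
    print(c)
```

Note how compute-only chassis (nodes 3-5 here) get `ovn-bridge-mappings` removed, while the gateway chassis (nodes 0-2) additionally set `ovn-cms-options=enable-chassis-as-gw,...`, which is what makes them eligible to host router gateway ports.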
2025-08-29 17:53:56.172531 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-08-29 17:53:56.172550 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-08-29 17:53:56.172579 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 17:53:56.172600 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-08-29 17:53:56.172620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-08-29 17:53:56.172640 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-08-29 17:53:56.172659 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 17:53:56.172678 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 17:53:56.172698 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 17:53:56.172721 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 17:53:56.172733 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 17:53:56.172744 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-08-29 17:53:56.172754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 17:53:56.172765 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 17:53:56.172786 | 
orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 17:53:56.172798 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 17:53:56.172809 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 17:53:56.172819 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 17:53:56.172830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-08-29 17:53:56.172840 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 17:53:56.172851 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 17:53:56.172862 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 17:53:56.172872 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 17:53:56.172883 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 17:53:56.172893 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-08-29 17:53:56.172911 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 17:53:56.172922 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 17:53:56.172932 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': 
'60'}) 2025-08-29 17:53:56.172948 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 17:53:56.172965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 17:53:56.172992 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-08-29 17:53:56.173013 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 17:53:56.173030 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 17:53:56.173048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 17:53:56.173067 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 17:53:56.173078 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 17:53:56.173089 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-08-29 17:53:56.173099 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-08-29 17:53:56.173110 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-08-29 17:53:56.173130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 17:53:56.173141 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-08-29 17:53:56.173152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'present'}) 2025-08-29 17:53:56.173162 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 17:53:56.173184 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-08-29 17:53:56.173195 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-08-29 17:53:56.173206 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-08-29 17:53:56.173216 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 17:53:56.173227 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-08-29 17:53:56.173237 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-08-29 17:53:56.173248 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-08-29 17:53:56.173259 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 17:53:56.173269 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 17:53:56.173280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-08-29 17:53:56.173290 | orchestrator | 2025-08-29 17:53:56.173301 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2025-08-29 17:53:56.173312 | orchestrator | Friday 29 August 2025 17:51:52 +0000 (0:00:19.426) 0:00:33.820 ********* 2025-08-29 17:53:56.173323 | orchestrator | 2025-08-29 17:53:56.173333 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 17:53:56.173344 | orchestrator | Friday 29 August 2025 17:51:52 +0000 (0:00:00.387) 0:00:34.208 ********* 2025-08-29 17:53:56.173355 | orchestrator | 2025-08-29 17:53:56.173442 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 17:53:56.173458 | orchestrator | Friday 29 August 2025 17:51:53 +0000 (0:00:00.073) 0:00:34.281 ********* 2025-08-29 17:53:56.173468 | orchestrator | 2025-08-29 17:53:56.173479 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 17:53:56.173489 | orchestrator | Friday 29 August 2025 17:51:53 +0000 (0:00:00.084) 0:00:34.366 ********* 2025-08-29 17:53:56.173500 | orchestrator | 2025-08-29 17:53:56.173511 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 17:53:56.173528 | orchestrator | Friday 29 August 2025 17:51:53 +0000 (0:00:00.070) 0:00:34.437 ********* 2025-08-29 17:53:56.173539 | orchestrator | 2025-08-29 17:53:56.173550 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-08-29 17:53:56.173560 | orchestrator | Friday 29 August 2025 17:51:53 +0000 (0:00:00.074) 0:00:34.512 ********* 2025-08-29 17:53:56.173571 | orchestrator | 2025-08-29 17:53:56.173581 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-08-29 17:53:56.173592 | orchestrator | Friday 29 August 2025 17:51:53 +0000 (0:00:00.066) 0:00:34.579 ********* 2025-08-29 17:53:56.173602 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:53:56.173614 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 17:53:56.173624 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:53:56.173635 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.173650 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.173674 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.173701 | orchestrator | 2025-08-29 17:53:56.173721 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-08-29 17:53:56.173741 | orchestrator | Friday 29 August 2025 17:51:54 +0000 (0:00:01.556) 0:00:36.135 ********* 2025-08-29 17:53:56.173760 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.173796 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:53:56.173815 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:53:56.173835 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.173856 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:53:56.173877 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.173898 | orchestrator | 2025-08-29 17:53:56.173920 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-08-29 17:53:56.173941 | orchestrator | 2025-08-29 17:53:56.173962 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 17:53:56.173989 | orchestrator | Friday 29 August 2025 17:52:37 +0000 (0:00:42.950) 0:01:19.086 ********* 2025-08-29 17:53:56.174012 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:53:56.174118 | orchestrator | 2025-08-29 17:53:56.174138 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 17:53:56.174157 | orchestrator | Friday 29 August 2025 17:52:38 +0000 (0:00:01.053) 0:01:20.139 ********* 2025-08-29 17:53:56.174174 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-08-29 17:53:56.174186 | orchestrator | 2025-08-29 17:53:56.174209 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-08-29 17:53:56.174220 | orchestrator | Friday 29 August 2025 17:52:39 +0000 (0:00:00.591) 0:01:20.730 ********* 2025-08-29 17:53:56.174231 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.174241 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.174252 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.174262 | orchestrator | 2025-08-29 17:53:56.174273 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-08-29 17:53:56.174289 | orchestrator | Friday 29 August 2025 17:52:40 +0000 (0:00:01.150) 0:01:21.881 ********* 2025-08-29 17:53:56.174314 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.174337 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.174355 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.174399 | orchestrator | 2025-08-29 17:53:56.174415 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-08-29 17:53:56.174432 | orchestrator | Friday 29 August 2025 17:52:41 +0000 (0:00:00.424) 0:01:22.305 ********* 2025-08-29 17:53:56.174448 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.174464 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.174482 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.174500 | orchestrator | 2025-08-29 17:53:56.174518 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-08-29 17:53:56.174536 | orchestrator | Friday 29 August 2025 17:52:41 +0000 (0:00:00.554) 0:01:22.860 ********* 2025-08-29 17:53:56.174554 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.174571 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.174590 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.174606 | orchestrator | 
2025-08-29 17:53:56.174626 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-08-29 17:53:56.174645 | orchestrator | Friday 29 August 2025 17:52:41 +0000 (0:00:00.340) 0:01:23.201 ********* 2025-08-29 17:53:56.174664 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.174677 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.174688 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.174698 | orchestrator | 2025-08-29 17:53:56.174709 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-08-29 17:53:56.174720 | orchestrator | Friday 29 August 2025 17:52:42 +0000 (0:00:00.598) 0:01:23.800 ********* 2025-08-29 17:53:56.174730 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.174741 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.174751 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.174762 | orchestrator | 2025-08-29 17:53:56.174772 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-08-29 17:53:56.174783 | orchestrator | Friday 29 August 2025 17:52:42 +0000 (0:00:00.348) 0:01:24.148 ********* 2025-08-29 17:53:56.174807 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.174817 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.174828 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.174839 | orchestrator | 2025-08-29 17:53:56.174850 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-08-29 17:53:56.174861 | orchestrator | Friday 29 August 2025 17:52:43 +0000 (0:00:00.333) 0:01:24.481 ********* 2025-08-29 17:53:56.174871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.174882 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.174893 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.174903 | orchestrator | 2025-08-29 
17:53:56.174914 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-08-29 17:53:56.174924 | orchestrator | Friday 29 August 2025 17:52:43 +0000 (0:00:00.306) 0:01:24.788 ********* 2025-08-29 17:53:56.174935 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.174945 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.174956 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.174966 | orchestrator | 2025-08-29 17:53:56.174977 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-08-29 17:53:56.175001 | orchestrator | Friday 29 August 2025 17:52:44 +0000 (0:00:00.550) 0:01:25.338 ********* 2025-08-29 17:53:56.175012 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175023 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175033 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175044 | orchestrator | 2025-08-29 17:53:56.175055 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-08-29 17:53:56.175065 | orchestrator | Friday 29 August 2025 17:52:44 +0000 (0:00:00.325) 0:01:25.664 ********* 2025-08-29 17:53:56.175076 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175086 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175107 | orchestrator | 2025-08-29 17:53:56.175118 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-08-29 17:53:56.175128 | orchestrator | Friday 29 August 2025 17:52:44 +0000 (0:00:00.335) 0:01:25.999 ********* 2025-08-29 17:53:56.175139 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175149 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175160 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175170 | orchestrator | 2025-08-29 
17:53:56.175181 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-08-29 17:53:56.175191 | orchestrator | Friday 29 August 2025 17:52:45 +0000 (0:00:00.331) 0:01:26.331 ********* 2025-08-29 17:53:56.175202 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175213 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175223 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175234 | orchestrator | 2025-08-29 17:53:56.175244 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-08-29 17:53:56.175255 | orchestrator | Friday 29 August 2025 17:52:45 +0000 (0:00:00.668) 0:01:27.000 ********* 2025-08-29 17:53:56.175265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175276 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175286 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175297 | orchestrator | 2025-08-29 17:53:56.175308 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-08-29 17:53:56.175318 | orchestrator | Friday 29 August 2025 17:52:46 +0000 (0:00:00.327) 0:01:27.327 ********* 2025-08-29 17:53:56.175329 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175340 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175350 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175361 | orchestrator | 2025-08-29 17:53:56.175422 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-08-29 17:53:56.175448 | orchestrator | Friday 29 August 2025 17:52:46 +0000 (0:00:00.335) 0:01:27.663 ********* 2025-08-29 17:53:56.175479 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175497 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175514 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175530 | orchestrator | 2025-08-29 
17:53:56.175546 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-08-29 17:53:56.175560 | orchestrator | Friday 29 August 2025 17:52:46 +0000 (0:00:00.362) 0:01:28.025 ********* 2025-08-29 17:53:56.175575 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175591 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175609 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175624 | orchestrator | 2025-08-29 17:53:56.175642 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 17:53:56.175660 | orchestrator | Friday 29 August 2025 17:52:47 +0000 (0:00:00.576) 0:01:28.602 ********* 2025-08-29 17:53:56.175677 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:53:56.175696 | orchestrator | 2025-08-29 17:53:56.175716 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-08-29 17:53:56.175735 | orchestrator | Friday 29 August 2025 17:52:48 +0000 (0:00:00.790) 0:01:29.392 ********* 2025-08-29 17:53:56.175752 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.175769 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.175780 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.175790 | orchestrator | 2025-08-29 17:53:56.175801 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-08-29 17:53:56.175811 | orchestrator | Friday 29 August 2025 17:52:48 +0000 (0:00:00.477) 0:01:29.870 ********* 2025-08-29 17:53:56.175822 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.175832 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.175843 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.175853 | orchestrator | 2025-08-29 17:53:56.175864 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2025-08-29 17:53:56.175874 | orchestrator | Friday 29 August 2025 17:52:49 +0000 (0:00:00.932) 0:01:30.803 ********* 2025-08-29 17:53:56.175885 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175896 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.175906 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.175916 | orchestrator | 2025-08-29 17:53:56.175930 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-08-29 17:53:56.175953 | orchestrator | Friday 29 August 2025 17:52:50 +0000 (0:00:00.650) 0:01:31.453 ********* 2025-08-29 17:53:56.175979 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.175997 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.176015 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.176031 | orchestrator | 2025-08-29 17:53:56.176051 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-08-29 17:53:56.176069 | orchestrator | Friday 29 August 2025 17:52:50 +0000 (0:00:00.545) 0:01:31.999 ********* 2025-08-29 17:53:56.176087 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.176107 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.176124 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.176143 | orchestrator | 2025-08-29 17:53:56.176154 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-08-29 17:53:56.176165 | orchestrator | Friday 29 August 2025 17:52:51 +0000 (0:00:00.600) 0:01:32.600 ********* 2025-08-29 17:53:56.176175 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.176186 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.176196 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.176207 | orchestrator | 2025-08-29 17:53:56.176218 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2025-08-29 17:53:56.176237 | orchestrator | Friday 29 August 2025 17:52:52 +0000 (0:00:00.720) 0:01:33.320 ********* 2025-08-29 17:53:56.176248 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.176258 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.176279 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.176290 | orchestrator | 2025-08-29 17:53:56.176301 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-08-29 17:53:56.176311 | orchestrator | Friday 29 August 2025 17:52:52 +0000 (0:00:00.362) 0:01:33.682 ********* 2025-08-29 17:53:56.176322 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.176332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.176343 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.176353 | orchestrator | 2025-08-29 17:53:56.176364 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 17:53:56.176401 | orchestrator | Friday 29 August 2025 17:52:52 +0000 (0:00:00.334) 0:01:34.017 ********* 2025-08-29 17:53:56.176414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-08-29 17:53:56.176451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176545 | orchestrator | 2025-08-29 17:53:56.176557 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 17:53:56.176567 | orchestrator | Friday 29 August 2025 17:52:54 +0000 (0:00:01.406) 0:01:35.423 ********* 2025-08-29 17:53:56.176578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176743 | orchestrator | 2025-08-29 17:53:56.176760 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 17:53:56.176770 | orchestrator | Friday 29 August 2025 17:52:59 +0000 (0:00:04.934) 0:01:40.358 ********* 2025-08-29 17:53:56.176781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176846 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.176896 | orchestrator | 2025-08-29 17:53:56.176907 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:53:56.176918 | 
orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:02.030) 0:01:42.388 ********* 2025-08-29 17:53:56.176928 | orchestrator | 2025-08-29 17:53:56.176939 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:53:56.176959 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:00.069) 0:01:42.458 ********* 2025-08-29 17:53:56.176990 | orchestrator | 2025-08-29 17:53:56.177009 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:53:56.177028 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:00.118) 0:01:42.576 ********* 2025-08-29 17:53:56.177047 | orchestrator | 2025-08-29 17:53:56.177065 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 17:53:56.177079 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:00.125) 0:01:42.701 ********* 2025-08-29 17:53:56.177090 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.177108 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.177135 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.177156 | orchestrator | 2025-08-29 17:53:56.177173 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 17:53:56.177191 | orchestrator | Friday 29 August 2025 17:53:09 +0000 (0:00:07.752) 0:01:50.454 ********* 2025-08-29 17:53:56.177209 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.177226 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.177242 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.177259 | orchestrator | 2025-08-29 17:53:56.177278 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 17:53:56.177296 | orchestrator | Friday 29 August 2025 17:53:11 +0000 (0:00:02.744) 0:01:53.198 ********* 2025-08-29 17:53:56.177316 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 17:53:56.177334 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.177353 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.177396 | orchestrator | 2025-08-29 17:53:56.177418 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 17:53:56.177431 | orchestrator | Friday 29 August 2025 17:53:14 +0000 (0:00:02.885) 0:01:56.084 ********* 2025-08-29 17:53:56.177441 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.177451 | orchestrator | 2025-08-29 17:53:56.177462 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 17:53:56.177473 | orchestrator | Friday 29 August 2025 17:53:15 +0000 (0:00:00.248) 0:01:56.333 ********* 2025-08-29 17:53:56.177483 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.177494 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.177504 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.177514 | orchestrator | 2025-08-29 17:53:56.177535 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 17:53:56.177546 | orchestrator | Friday 29 August 2025 17:53:16 +0000 (0:00:01.019) 0:01:57.353 ********* 2025-08-29 17:53:56.177556 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.177567 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.177577 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.177599 | orchestrator | 2025-08-29 17:53:56.177609 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 17:53:56.177620 | orchestrator | Friday 29 August 2025 17:53:16 +0000 (0:00:00.585) 0:01:57.938 ********* 2025-08-29 17:53:56.177630 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.177641 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.177651 | orchestrator | ok: [testbed-node-2] 2025-08-29 
17:53:56.177662 | orchestrator | 2025-08-29 17:53:56.177672 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 17:53:56.177683 | orchestrator | Friday 29 August 2025 17:53:17 +0000 (0:00:01.066) 0:01:59.005 ********* 2025-08-29 17:53:56.177694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.177705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.177715 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.177726 | orchestrator | 2025-08-29 17:53:56.177736 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 17:53:56.177747 | orchestrator | Friday 29 August 2025 17:53:18 +0000 (0:00:00.656) 0:01:59.661 ********* 2025-08-29 17:53:56.177757 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.177768 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.177778 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.177789 | orchestrator | 2025-08-29 17:53:56.177799 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 17:53:56.177810 | orchestrator | Friday 29 August 2025 17:53:19 +0000 (0:00:00.939) 0:02:00.601 ********* 2025-08-29 17:53:56.177820 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.177831 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.177841 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.177852 | orchestrator | 2025-08-29 17:53:56.177862 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-08-29 17:53:56.177873 | orchestrator | Friday 29 August 2025 17:53:20 +0000 (0:00:00.804) 0:02:01.405 ********* 2025-08-29 17:53:56.177883 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.177893 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.177904 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.177914 | orchestrator | 2025-08-29 
17:53:56.177925 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 17:53:56.177936 | orchestrator | Friday 29 August 2025 17:53:20 +0000 (0:00:00.655) 0:02:02.060 ********* 2025-08-29 17:53:56.177947 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.177958 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.177970 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.177981 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.177999 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178078 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178102 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178114 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178125 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178136 | orchestrator | 2025-08-29 17:53:56.178146 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 17:53:56.178157 | orchestrator | Friday 29 August 2025 17:53:22 +0000 (0:00:01.445) 0:02:03.506 ********* 2025-08-29 17:53:56.178168 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178179 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178190 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178206 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-08-29 17:53:56.178285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178296 | orchestrator | 2025-08-29 17:53:56.178306 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 17:53:56.178317 | orchestrator | Friday 29 August 2025 17:53:26 +0000 (0:00:04.112) 0:02:07.618 ********* 2025-08-29 17:53:56.178328 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178339 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 
17:53:56.178405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178426 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 17:53:56.178490 | orchestrator | 2025-08-29 17:53:56.178500 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:53:56.178511 | orchestrator | Friday 29 August 2025 17:53:29 +0000 (0:00:03.029) 0:02:10.648 ********* 2025-08-29 17:53:56.178521 | orchestrator | 2025-08-29 17:53:56.178532 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:53:56.178542 | orchestrator | Friday 29 August 2025 17:53:29 +0000 (0:00:00.085) 0:02:10.733 ********* 2025-08-29 17:53:56.178553 | orchestrator | 2025-08-29 17:53:56.178563 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 17:53:56.178574 | orchestrator | Friday 29 August 2025 17:53:29 +0000 (0:00:00.331) 0:02:11.065 ********* 2025-08-29 17:53:56.178584 | orchestrator | 2025-08-29 17:53:56.178595 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 17:53:56.178605 | orchestrator | Friday 29 August 2025 17:53:29 +0000 (0:00:00.122) 0:02:11.188 ********* 2025-08-29 17:53:56.178616 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.178626 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 17:53:56.178637 | orchestrator | 2025-08-29 17:53:56.178647 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 17:53:56.178658 | orchestrator | Friday 29 August 2025 17:53:36 +0000 (0:00:06.709) 0:02:17.897 ********* 2025-08-29 17:53:56.178668 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.178679 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.178689 | orchestrator | 2025-08-29 17:53:56.178700 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 17:53:56.178710 | orchestrator | Friday 29 August 2025 17:53:43 +0000 (0:00:06.345) 0:02:24.242 ********* 2025-08-29 17:53:56.178727 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:53:56.178738 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:53:56.178748 | orchestrator | 2025-08-29 17:53:56.178759 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 17:53:56.178770 | orchestrator | Friday 29 August 2025 17:53:49 +0000 (0:00:06.329) 0:02:30.572 ********* 2025-08-29 17:53:56.178781 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:53:56.178791 | orchestrator | 2025-08-29 17:53:56.178802 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 17:53:56.178818 | orchestrator | Friday 29 August 2025 17:53:49 +0000 (0:00:00.152) 0:02:30.724 ********* 2025-08-29 17:53:56.178838 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.178858 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.178878 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.178898 | orchestrator | 2025-08-29 17:53:56.178916 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 17:53:56.178936 | orchestrator | Friday 29 August 2025 17:53:50 +0000 (0:00:00.788) 0:02:31.512 ********* 
2025-08-29 17:53:56.178957 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.178988 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.179008 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.179026 | orchestrator | 2025-08-29 17:53:56.179038 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 17:53:56.179048 | orchestrator | Friday 29 August 2025 17:53:51 +0000 (0:00:00.738) 0:02:32.251 ********* 2025-08-29 17:53:56.179059 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.179069 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.179080 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.179090 | orchestrator | 2025-08-29 17:53:56.179101 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 17:53:56.179111 | orchestrator | Friday 29 August 2025 17:53:51 +0000 (0:00:00.773) 0:02:33.024 ********* 2025-08-29 17:53:56.179122 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:53:56.179132 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:53:56.179143 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:53:56.179153 | orchestrator | 2025-08-29 17:53:56.179164 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 17:53:56.179176 | orchestrator | Friday 29 August 2025 17:53:52 +0000 (0:00:00.590) 0:02:33.615 ********* 2025-08-29 17:53:56.179194 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:53:56.179220 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.179242 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.179259 | orchestrator | 2025-08-29 17:53:56.179276 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 17:53:56.179293 | orchestrator | Friday 29 August 2025 17:53:53 +0000 (0:00:00.650) 0:02:34.266 ********* 2025-08-29 17:53:56.179309 | orchestrator 
| ok: [testbed-node-0] 2025-08-29 17:53:56.179325 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:53:56.179341 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:53:56.179357 | orchestrator | 2025-08-29 17:53:56.179402 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 17:53:56.179421 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 17:53:56.179441 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 17:53:56.179472 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-08-29 17:53:56.179491 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:53:56.179509 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:53:56.179543 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 17:53:56.179561 | orchestrator | 2025-08-29 17:53:56.179580 | orchestrator | 2025-08-29 17:53:56.179594 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 17:53:56.179605 | orchestrator | Friday 29 August 2025 17:53:54 +0000 (0:00:01.095) 0:02:35.362 ********* 2025-08-29 17:53:56.179615 | orchestrator | =============================================================================== 2025-08-29 17:53:56.179626 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 42.95s 2025-08-29 17:53:56.179636 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.43s 2025-08-29 17:53:56.179647 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.46s 2025-08-29 17:53:56.179658 | orchestrator | ovn-db 
: Restart ovn-northd container ----------------------------------- 9.22s 2025-08-29 17:53:56.179669 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.09s 2025-08-29 17:53:56.179679 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.93s 2025-08-29 17:53:56.179690 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.11s 2025-08-29 17:53:56.179700 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.03s 2025-08-29 17:53:56.179711 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.53s 2025-08-29 17:53:56.179721 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.12s 2025-08-29 17:53:56.179732 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.10s 2025-08-29 17:53:56.179742 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.03s 2025-08-29 17:53:56.179753 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.68s 2025-08-29 17:53:56.179764 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.56s 2025-08-29 17:53:56.179774 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2025-08-29 17:53:56.179785 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.44s 2025-08-29 17:53:56.179795 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2025-08-29 17:53:56.179806 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.25s 2025-08-29 17:53:56.179816 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.15s 2025-08-29 17:53:56.179826 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 1.12s
2025-08-29 17:53:56.179844 | orchestrator | 2025-08-29 17:53:56 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED
2025-08-29 17:53:56.179855 | orchestrator | 2025-08-29 17:53:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:53:59.212343 | orchestrator | 2025-08-29 17:53:59 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:53:59.214538 | orchestrator | 2025-08-29 17:53:59 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state STARTED
2025-08-29 17:53:59.214761 | orchestrator | 2025-08-29 17:53:59 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:57:02.021565 | orchestrator | 2025-08-29 17:57:02 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 17:57:02.021824 | orchestrator | 2025-08-29 17:57:02 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:57:02.030234 | orchestrator |
2025-08-29 17:57:02.030290 | orchestrator |
2025-08-29 17:57:02.030304 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 17:57:02.030348 | orchestrator |
2025-08-29 17:57:02.030359 | orchestrator | TASK [Group hosts
based on Kolla action] ***************************************
2025-08-29 17:57:02.030372 | orchestrator | Friday 29 August 2025 17:49:51 +0000 (0:00:00.344) 0:00:00.344 *********
2025-08-29 17:57:02.030383 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.030395 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.030406 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.030417 | orchestrator |
2025-08-29 17:57:02.030582 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 17:57:02.030605 | orchestrator | Friday 29 August 2025 17:49:52 +0000 (0:00:00.373) 0:00:00.718 *********
2025-08-29 17:57:02.030625 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-08-29 17:57:02.030646 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-08-29 17:57:02.030667 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-08-29 17:57:02.030702 | orchestrator |
2025-08-29 17:57:02.030713 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-08-29 17:57:02.030724 | orchestrator |
2025-08-29 17:57:02.030735 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 17:57:02.030746 | orchestrator | Friday 29 August 2025 17:49:53 +0000 (0:00:01.096) 0:00:01.815 *********
2025-08-29 17:57:02.030757 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.030768 | orchestrator |
2025-08-29 17:57:02.030781 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-08-29 17:57:02.030800 | orchestrator | Friday 29 August 2025 17:49:54 +0000 (0:00:01.552) 0:00:03.367 *********
2025-08-29 17:57:02.030820 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.030840 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.030867 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.030887 | orchestrator |
2025-08-29 17:57:02.030905 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 17:57:02.030926 | orchestrator | Friday 29 August 2025 17:49:57 +0000 (0:00:02.365) 0:00:05.733 *********
2025-08-29 17:57:02.030946 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.030966 | orchestrator |
2025-08-29 17:57:02.030979 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-08-29 17:57:02.030993 | orchestrator | Friday 29 August 2025 17:49:59 +0000 (0:00:01.862) 0:00:07.596 *********
2025-08-29 17:57:02.031005 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.031018 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.031031 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.031043 | orchestrator |
2025-08-29 17:57:02.031056 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-08-29 17:57:02.031069 | orchestrator | Friday 29 August 2025 17:49:59 +0000 (0:00:00.870) 0:00:08.466 *********
2025-08-29 17:57:02.031082 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:57:02.031095 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:57:02.031107 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:57:02.031120 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:57:02.031133 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:57:02.031145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 17:57:02.031178 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 17:57:02.031190 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 17:57:02.031201 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 17:57:02.031212 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 17:57:02.031223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 17:57:02.031234 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 17:57:02.031245 | orchestrator |
2025-08-29 17:57:02.031255 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 17:57:02.031266 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:03.111) 0:00:11.578 *********
2025-08-29 17:57:02.031277 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 17:57:02.031288 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 17:57:02.031299 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 17:57:02.031341 | orchestrator |
2025-08-29 17:57:02.031353 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 17:57:02.031364 | orchestrator | Friday 29 August 2025 17:50:04 +0000 (0:00:01.549) 0:00:13.127 *********
2025-08-29 17:57:02.031375 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 17:57:02.031386 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 17:57:02.031397 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 17:57:02.031407 | orchestrator |
2025-08-29 17:57:02.031418 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 17:57:02.031436 | orchestrator | Friday 29 August 2025 17:50:07 +0000 (0:00:02.977) 0:00:16.104 *********
2025-08-29 17:57:02.031452 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-08-29 17:57:02.031471 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.031524 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-08-29 17:57:02.031537 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.031548 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-08-29 17:57:02.031559 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.031569 | orchestrator |
2025-08-29 17:57:02.031580 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-08-29 17:57:02.031591 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:02.271) 0:00:18.376 *********
2025-08-29 17:57:02.031605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 17:57:02.031623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 17:57:02.031635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 17:57:02.031656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:57:02.031668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:57:02.031694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 17:57:02.031706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:57:02.031719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:57:02.031730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 17:57:02.031748 | orchestrator |
2025-08-29 17:57:02.031759 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-08-29 17:57:02.031770 | orchestrator | Friday 29 August 2025 17:50:13 +0000 (0:00:04.017) 0:00:22.393 *********
2025-08-29 17:57:02.031781 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.031792 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.031803 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.031813 | orchestrator |
2025-08-29 17:57:02.031824 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-08-29 17:57:02.031835 | orchestrator | Friday 29 August 2025 17:50:16 +0000 (0:00:02.813) 0:00:25.206 *********
2025-08-29 17:57:02.031845 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-08-29 17:57:02.031856 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-08-29 17:57:02.031868 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-08-29 17:57:02.031887 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-08-29 17:57:02.031906 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-08-29
17:57:02.031925 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-08-29 17:57:02.031945 | orchestrator | 2025-08-29 17:57:02.031964 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-08-29 17:57:02.031982 | orchestrator | Friday 29 August 2025 17:50:20 +0000 (0:00:04.174) 0:00:29.381 ********* 2025-08-29 17:57:02.031994 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.032005 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.032016 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.032026 | orchestrator | 2025-08-29 17:57:02.032037 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-08-29 17:57:02.032047 | orchestrator | Friday 29 August 2025 17:50:24 +0000 (0:00:04.061) 0:00:33.443 ********* 2025-08-29 17:57:02.032058 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:57:02.032069 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:57:02.032079 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:57:02.032090 | orchestrator | 2025-08-29 17:57:02.032101 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-08-29 17:57:02.032111 | orchestrator | Friday 29 August 2025 17:50:27 +0000 (0:00:03.004) 0:00:36.447 ********* 2025-08-29 17:57:02.032123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.032150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.032163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.032185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2025-08-29 17:57:02.032197 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.032209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.032220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.032231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.032253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 17:57:02.032265 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.032276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.032295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.032330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.032347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 17:57:02.032358 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.032369 | orchestrator | 2025-08-29 17:57:02.032380 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-08-29 17:57:02.032391 | orchestrator | Friday 29 August 2025 17:50:29 +0000 (0:00:01.847) 0:00:38.294 ********* 2025-08-29 17:57:02.032402 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 
17:57:02.032458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.032481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 17:57:02.032492 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.032535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 17:57:02.032554 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.032577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850', '__omit_place_holder__ceab53ce62997845c84efe79b63af8e0eb062850'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 17:57:02.032588 | orchestrator | 2025-08-29 
17:57:02.032599 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-08-29 17:57:02.032610 | orchestrator | Friday 29 August 2025 17:50:34 +0000 (0:00:04.697) 0:00:42.992 ********* 2025-08-29 17:57:02.032622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.032713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.032724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.032735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.032754 | orchestrator | 2025-08-29 17:57:02.032766 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-08-29 17:57:02.032776 | orchestrator | Friday 29 August 2025 17:50:37 +0000 (0:00:03.486) 0:00:46.478 ********* 2025-08-29 17:57:02.032787 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 17:57:02.033076 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 17:57:02.033094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 17:57:02.033105 | orchestrator | 2025-08-29 17:57:02.033115 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-08-29 17:57:02.033126 | orchestrator | Friday 29 August 2025 17:50:40 +0000 (0:00:02.548) 0:00:49.026 ********* 2025-08-29 17:57:02.033137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 17:57:02.033148 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 17:57:02.033159 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 17:57:02.033169 | orchestrator | 2025-08-29 17:57:02.033180 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-08-29 17:57:02.033190 | orchestrator | Friday 29 August 2025 17:50:48 +0000 (0:00:07.992) 0:00:57.019 ********* 2025-08-29 17:57:02.033201 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.033212 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
17:57:02.033223 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.033233 | orchestrator | 2025-08-29 17:57:02.033244 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-08-29 17:57:02.033255 | orchestrator | Friday 29 August 2025 17:50:49 +0000 (0:00:01.115) 0:00:58.134 ********* 2025-08-29 17:57:02.033265 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 17:57:02.033277 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 17:57:02.033289 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 17:57:02.033299 | orchestrator | 2025-08-29 17:57:02.033353 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-08-29 17:57:02.033366 | orchestrator | Friday 29 August 2025 17:50:54 +0000 (0:00:04.805) 0:01:02.940 ********* 2025-08-29 17:57:02.033411 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 17:57:02.033424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 17:57:02.033435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 17:57:02.033453 | orchestrator | 2025-08-29 17:57:02.033472 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-08-29 17:57:02.033748 | orchestrator | Friday 29 August 2025 17:50:58 +0000 (0:00:03.862) 0:01:06.803 ********* 2025-08-29 17:57:02.033770 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-08-29 17:57:02.033783 | orchestrator | changed: 
[testbed-node-2] => (item=haproxy.pem) 2025-08-29 17:57:02.033795 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-08-29 17:57:02.033808 | orchestrator | 2025-08-29 17:57:02.034924 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-08-29 17:57:02.034958 | orchestrator | Friday 29 August 2025 17:51:00 +0000 (0:00:02.344) 0:01:09.148 ********* 2025-08-29 17:57:02.034969 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-08-29 17:57:02.034980 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-08-29 17:57:02.034991 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-08-29 17:57:02.035002 | orchestrator | 2025-08-29 17:57:02.035013 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 17:57:02.035024 | orchestrator | Friday 29 August 2025 17:51:02 +0000 (0:00:01.933) 0:01:11.081 ********* 2025-08-29 17:57:02.035034 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.035045 | orchestrator | 2025-08-29 17:57:02.035056 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-08-29 17:57:02.035067 | orchestrator | Friday 29 August 2025 17:51:03 +0000 (0:00:00.902) 0:01:11.983 ********* 2025-08-29 17:57:02.035080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.035111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.035125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.035136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.035147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.035166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.035178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.035191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.035215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.035227 | orchestrator | 2025-08-29 17:57:02.035238 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 17:57:02.035250 | orchestrator | Friday 29 August 2025 17:51:07 +0000 (0:00:03.972) 0:01:15.955 ********* 2025-08-29 17:57:02.035261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.035809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.035840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.035854 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.035868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.035881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.035930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.035944 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.035955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.035967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.035985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.035996 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.036007 | orchestrator | 2025-08-29 17:57:02.036018 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 17:57:02.036029 | orchestrator | Friday 29 August 2025 17:51:08 +0000 (0:00:01.233) 0:01:17.189 ********* 2025-08-29 17:57:02.036040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.036052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.036579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.036594 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.036605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.036617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.036702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.036717 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.036761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.036775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037046 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.037056 | orchestrator | 2025-08-29 17:57:02.037066 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA 
certificates] ******** 2025-08-29 17:57:02.037076 | orchestrator | Friday 29 August 2025 17:51:09 +0000 (0:00:01.295) 0:01:18.484 ********* 2025-08-29 17:57:02.037117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037158 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.037168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037199 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.037235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037273 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.037282 | orchestrator | 2025-08-29 17:57:02.037292 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 17:57:02.037302 | orchestrator | Friday 29 August 2025 17:51:10 +0000 (0:00:00.726) 0:01:19.210 ********* 2025-08-29 17:57:02.037370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037402 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.037412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037487 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.037497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037527 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.037536 | orchestrator | 2025-08-29 17:57:02.037546 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 17:57:02.037556 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.757) 0:01:19.968 ********* 2025-08-29 17:57:02.037566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.037644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.037686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037755 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.037767 | orchestrator | 2025-08-29 17:57:02.037778 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-08-29 17:57:02.037790 | orchestrator | Friday 29 August 2025 17:51:12 +0000 (0:00:01.152) 0:01:21.120 ********* 2025-08-29 17:57:02.037801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037836 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.037848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037919 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.037929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.037939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.037948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.037958 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.037967 | orchestrator | 2025-08-29 17:57:02.037977 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-08-29 17:57:02.037986 | orchestrator | Friday 29 August 2025 17:51:13 +0000 (0:00:00.770) 0:01:21.891 ********* 2025-08-29 17:57:02.037995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.038005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.038075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.038087 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.038097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.038107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.038116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.038124 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.038132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.038141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.038154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.038162 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.038170 | orchestrator | 2025-08-29 17:57:02.038178 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-08-29 17:57:02.038209 | orchestrator | Friday 29 August 2025 17:51:13 +0000 (0:00:00.530) 0:01:22.422 ********* 2025-08-29 17:57:02.038218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.038226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.038235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.038243 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.038251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.038259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.038272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.038281 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.038328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 17:57:02.038339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 17:57:02.038347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 17:57:02.038355 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.038363 | orchestrator | 2025-08-29 17:57:02.038371 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 17:57:02.038379 | orchestrator | Friday 29 August 2025 17:51:14 +0000 (0:00:00.964) 0:01:23.386 ********* 2025-08-29 17:57:02.038387 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 17:57:02.038395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 17:57:02.038404 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 17:57:02.038411 | orchestrator | 2025-08-29 17:57:02.038419 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 17:57:02.038427 | orchestrator | Friday 29 August 2025 17:51:16 +0000 (0:00:01.518) 0:01:24.904 ********* 2025-08-29 17:57:02.038440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 17:57:02.038448 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 17:57:02.038456 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 17:57:02.038464 | orchestrator | 2025-08-29 17:57:02.038472 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 17:57:02.038480 | orchestrator | Friday 29 August 2025 17:51:17 +0000 (0:00:01.466) 0:01:26.371 ********* 2025-08-29 17:57:02.038488 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:57:02.038496 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:57:02.038504 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:57:02.038511 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.038519 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 17:57:02.038527 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:57:02.038535 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.038543 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 17:57:02.038550 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.038558 | orchestrator | 2025-08-29 17:57:02.038566 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 17:57:02.038574 | orchestrator | Friday 29 August 2025 17:51:19 +0000 (0:00:01.225) 0:01:27.597 ********* 2025-08-29 17:57:02.038605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.038615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.038623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 17:57:02.038632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.038648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.038656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 17:57:02.038664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.038695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2025-08-29 17:57:02.038705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 17:57:02.038713 | orchestrator | 2025-08-29 17:57:02.038721 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 17:57:02.038729 | orchestrator | Friday 29 August 2025 17:51:21 +0000 (0:00:02.774) 0:01:30.372 ********* 2025-08-29 17:57:02.038737 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.038745 | orchestrator | 2025-08-29 17:57:02.038752 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 17:57:02.038765 | orchestrator | Friday 29 August 2025 17:51:22 +0000 (0:00:00.604) 0:01:30.976 ********* 2025-08-29 17:57:02.038775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 17:57:02.038784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:57:02.038793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.038801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02 | INFO  | Task 42f0e3f2-31ce-4081-85d6-a20b9bbaa106 is in state SUCCESS 2025-08-29 17:57:02.038833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 17:57:02.038852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29
17:57:02.038865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:57:02.038874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:57:02.038882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.038890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.038921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.038930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.038943 | orchestrator | 2025-08-29 17:57:02.038951 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 17:57:02.038959 | orchestrator | Friday 29 August 2025 
17:51:26 +0000 (0:00:03.958) 0:01:34.935 ********* 2025-08-29 17:57:02.038968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 17:57:02.038976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:57:02.038984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.038992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039000 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.039031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 17:57:02.039046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:57:02.039054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 17:57:02.039062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 17:57:02.039070 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039120 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.039128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039136 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.039144 | orchestrator | 2025-08-29 17:57:02.039152 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-08-29 17:57:02.039160 | orchestrator | Friday 29 August 2025 17:51:27 +0000 (0:00:00.874) 0:01:35.810 ********* 2025-08-29 17:57:02.039168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:57:02.039177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:57:02.039186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.039193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:57:02.039201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:57:02.039209 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.039217 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:57:02.039246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 17:57:02.039254 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.039262 | orchestrator | 2025-08-29 17:57:02.039270 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 17:57:02.039277 | orchestrator | Friday 29 August 2025 17:51:28 +0000 (0:00:01.339) 0:01:37.149 ********* 2025-08-29 17:57:02.039285 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.039293 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.039301 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.039321 | orchestrator | 2025-08-29 17:57:02.039329 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 17:57:02.039337 | orchestrator | Friday 29 August 2025 17:51:30 +0000 (0:00:01.767) 0:01:38.917 ********* 2025-08-29 17:57:02.039345 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.039352 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.039360 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.039368 | orchestrator | 2025-08-29 17:57:02.039375 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 17:57:02.039383 | orchestrator | Friday 29 August 2025 17:51:32 +0000 (0:00:02.118) 0:01:41.036 ********* 2025-08-29 17:57:02.039391 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.039399 | orchestrator | 2025-08-29 17:57:02.039407 | orchestrator | TASK [haproxy-config : Copying 
over barbican haproxy config] ******************* 2025-08-29 17:57:02.039420 | orchestrator | Friday 29 August 2025 17:51:33 +0000 (0:00:00.719) 0:01:41.756 ********* 2025-08-29 17:57:02.039453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.039466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.039474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.039547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039563 | orchestrator | 2025-08-29 17:57:02.039571 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 17:57:02.039578 | orchestrator | Friday 29 August 2025 17:51:39 +0000 (0:00:06.244) 0:01:48.000 ********* 2025-08-29 17:57:02.039587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.039603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039642 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.039651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.039659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039675 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.039684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.039718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039728 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.039736 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.039744 | orchestrator | 2025-08-29 17:57:02.039752 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-08-29 17:57:02.039760 | orchestrator | Friday 29 August 2025 17:51:40 +0000 (0:00:00.964) 0:01:48.965 ********* 2025-08-29 17:57:02.039768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:57:02.039776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:57:02.039785 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.039793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:57:02.039801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}})  2025-08-29 17:57:02.039808 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.039816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:57:02.039824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 17:57:02.039832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.039845 | orchestrator | 2025-08-29 17:57:02.039852 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-08-29 17:57:02.039860 | orchestrator | Friday 29 August 2025 17:51:41 +0000 (0:00:00.925) 0:01:49.890 ********* 2025-08-29 17:57:02.039868 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.039876 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.039883 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.039891 | orchestrator | 2025-08-29 17:57:02.039899 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 17:57:02.039906 | orchestrator | Friday 29 August 2025 17:51:42 +0000 (0:00:01.300) 0:01:51.190 ********* 2025-08-29 17:57:02.039914 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.039922 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.039929 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.039937 | orchestrator | 2025-08-29 17:57:02.039945 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 17:57:02.039953 | orchestrator | Friday 29 August 2025 17:51:44 +0000 (0:00:02.127) 0:01:53.318 ********* 2025-08-29 17:57:02.039960 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 17:57:02.039968 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.039976 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.039983 | orchestrator | 2025-08-29 17:57:02.039991 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 17:57:02.039999 | orchestrator | Friday 29 August 2025 17:51:45 +0000 (0:00:00.562) 0:01:53.880 ********* 2025-08-29 17:57:02.040006 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.040014 | orchestrator | 2025-08-29 17:57:02.040022 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 17:57:02.040030 | orchestrator | Friday 29 August 2025 17:51:46 +0000 (0:00:00.708) 0:01:54.588 ********* 2025-08-29 17:57:02.040062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 17:57:02.040073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 17:57:02.040082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 17:57:02.040095 | orchestrator | 2025-08-29 17:57:02.040103 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-08-29 17:57:02.040111 | orchestrator | Friday 29 August 2025 17:51:48 +0000 (0:00:02.578) 0:01:57.168 ********* 2025-08-29 17:57:02.040119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 17:57:02.040127 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.040136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 17:57:02.040144 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.040160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 17:57:02.040169 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.040177 | orchestrator | 2025-08-29 17:57:02.040185 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 17:57:02.040192 | orchestrator | Friday 29 August 2025 17:51:50 +0000 (0:00:02.276) 0:01:59.444 ********* 2025-08-29 17:57:02.040201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:57:02.040216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:57:02.040226 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.040234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:57:02.040242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:57:02.040250 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.040258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:57:02.040267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 17:57:02.040274 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.040282 | orchestrator | 2025-08-29 17:57:02.040290 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2025-08-29 17:57:02.040298 | orchestrator | Friday 29 August 2025 17:51:52 +0000 (0:00:01.680) 0:02:01.125 ********* 2025-08-29 17:57:02.040347 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.040357 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.040365 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.040373 | orchestrator | 2025-08-29 17:57:02.040380 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 17:57:02.040388 | orchestrator | Friday 29 August 2025 17:51:53 +0000 (0:00:00.579) 0:02:01.704 ********* 2025-08-29 17:57:02.040396 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.040404 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.040422 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.040430 | orchestrator | 2025-08-29 17:57:02.040438 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 17:57:02.040446 | orchestrator | Friday 29 August 2025 17:51:54 +0000 (0:00:01.567) 0:02:03.272 ********* 2025-08-29 17:57:02.040454 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.040462 | orchestrator | 2025-08-29 17:57:02.040469 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-08-29 17:57:02.040477 | orchestrator | Friday 29 August 2025 17:51:56 +0000 (0:00:01.343) 0:02:04.615 ********* 2025-08-29 17:57:02.040490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.040500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 
17:57:02.040517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.040550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-08-29 17:57:02.040575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.040583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040621 | orchestrator | 2025-08-29 17:57:02.040629 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-08-29 17:57:02.040637 | orchestrator | Friday 29 August 2025 17:52:01 +0000 (0:00:05.072) 0:02:09.687 ********* 2025-08-29 17:57:02.040645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.040653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.040699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.040708 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040716 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.040724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040779 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.040787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.040796 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.040803 | orchestrator | 2025-08-29 17:57:02.040811 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-08-29 17:57:02.040819 | orchestrator | Friday 29 August 2025 17:52:01 +0000 (0:00:00.778) 0:02:10.466 ********* 2025-08-29 17:57:02.040827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}})
2025-08-29 17:57:02.040835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:57:02.040849 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.040857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:57:02.040865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:57:02.040873 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.040887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:57:02.040895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 17:57:02.040901 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.040908 | orchestrator |
2025-08-29 17:57:02.040915 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-08-29 17:57:02.040921 | orchestrator | Friday 29 August 2025 17:52:03 +0000 (0:00:01.542) 0:02:12.008 *********
2025-08-29 17:57:02.040928 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.040934 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.040941 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.040947 | orchestrator |
2025-08-29 17:57:02.040954 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-08-29 17:57:02.040961 | orchestrator | Friday 29 August 2025 17:52:04 +0000 (0:00:01.409) 0:02:13.417 *********
2025-08-29 17:57:02.040967 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.040973 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.040980 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.040987 | orchestrator |
2025-08-29 17:57:02.040993 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-08-29 17:57:02.041000 | orchestrator | Friday 29 August 2025 17:52:07 +0000 (0:00:02.544) 0:02:15.961 *********
2025-08-29 17:57:02.041006 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.041013 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.041019 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.041026 | orchestrator |
2025-08-29 17:57:02.041032 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-08-29 17:57:02.041039 | orchestrator | Friday 29 August 2025 17:52:07 +0000 (0:00:00.387) 0:02:16.349 *********
2025-08-29 17:57:02.041045 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.041052 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.041058 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.041065 | orchestrator |
2025-08-29 17:57:02.041071 | orchestrator | TASK [include_role : designate] ************************************************
2025-08-29 17:57:02.041078 | orchestrator | Friday 29 August 2025 17:52:08 +0000 (0:00:00.578) 0:02:16.928 *********
2025-08-29 17:57:02.041085 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.041091 | orchestrator |
2025-08-29 17:57:02.041097 |
orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-08-29 17:57:02.041104 | orchestrator | Friday 29 August 2025 17:52:09 +0000 (0:00:00.866) 0:02:17.794 *********
2025-08-29 17:57:02.041111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:57:02.041123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:57:02.041134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:57:02.041186
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:57:02.041199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:57:02.041213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:57:02.041232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041301 | orchestrator |
2025-08-29 17:57:02.041318 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-08-29 17:57:02.041326 | orchestrator | Friday 29 August 2025 17:52:13 +0000 (0:00:03.995) 0:02:21.789 *********
2025-08-29 17:57:02.041340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:57:02.041348 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:57:02.041355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:57:02.041380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:57:02.041403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 17:57:02.041410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 17:57:02.041435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041463 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.041470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041523 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.041530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.041536 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.041547 | orchestrator |
2025-08-29 17:57:02.041554 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-08-29 17:57:02.041561 | orchestrator | Friday 29 August
2025 17:52:14 +0000 (0:00:01.379) 0:02:23.168 ********* 2025-08-29 17:57:02.041568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:57:02.041574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:57:02.041583 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.041589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:57:02.041596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:57:02.041603 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.041609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:57:02.041616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 17:57:02.041622 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.041629 | orchestrator | 2025-08-29 17:57:02.041636 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-08-29 17:57:02.041642 | orchestrator | Friday 29 August 2025 17:52:15 +0000 (0:00:01.154) 0:02:24.323 ********* 2025-08-29 17:57:02.041649 | 
orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.041655 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.041662 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.041668 | orchestrator | 2025-08-29 17:57:02.041675 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 17:57:02.041682 | orchestrator | Friday 29 August 2025 17:52:17 +0000 (0:00:01.291) 0:02:25.615 ********* 2025-08-29 17:57:02.041688 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.041695 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.041701 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.041708 | orchestrator | 2025-08-29 17:57:02.041714 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 17:57:02.041721 | orchestrator | Friday 29 August 2025 17:52:19 +0000 (0:00:02.020) 0:02:27.635 ********* 2025-08-29 17:57:02.041727 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.041734 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.041740 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.041747 | orchestrator | 2025-08-29 17:57:02.041753 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 17:57:02.041760 | orchestrator | Friday 29 August 2025 17:52:19 +0000 (0:00:00.667) 0:02:28.303 ********* 2025-08-29 17:57:02.041766 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.041773 | orchestrator | 2025-08-29 17:57:02.041779 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 17:57:02.041786 | orchestrator | Friday 29 August 2025 17:52:20 +0000 (0:00:01.021) 0:02:29.325 ********* 2025-08-29 17:57:02.041802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:57:02.041816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.043386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:57:02.043452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.043470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 17:57:02.043499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.043556 | orchestrator | 2025-08-29 17:57:02.043564 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 17:57:02.043571 | orchestrator | Friday 29 August 2025 17:52:25 +0000 (0:00:04.375) 0:02:33.700 ********* 2025-08-29 17:57:02.043588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:57:02.043623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:57:02.043632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.043646 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.043666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.043674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.043682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 17:57:02.043698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.043710 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.043717 | orchestrator | 2025-08-29 17:57:02.043887 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 17:57:02.043898 | orchestrator | Friday 29 August 2025 17:52:29 +0000 (0:00:03.862) 0:02:37.562 ********* 2025-08-29 17:57:02.043905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:57:02.043913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:57:02.043921 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.043928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:57:02.043936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:57:02.043954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:57:02.043965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 17:57:02.043973 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.043980 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.043987 | orchestrator | 2025-08-29 17:57:02.043994 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 17:57:02.044001 | orchestrator | Friday 29 August 2025 17:52:32 +0000 (0:00:03.533) 0:02:41.096 ********* 2025-08-29 17:57:02.044008 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.044059 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.044069 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.044075 | orchestrator | 2025-08-29 17:57:02.044082 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-08-29 17:57:02.044089 | orchestrator | Friday 29 August 2025 17:52:34 +0000 (0:00:01.430) 0:02:42.527 ********* 2025-08-29 17:57:02.044095 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.044102 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.044108 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.044115 | orchestrator | 2025-08-29 17:57:02.044122 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 17:57:02.044128 | orchestrator | Friday 29 August 2025 17:52:36 +0000 (0:00:02.244) 0:02:44.771 ********* 2025-08-29 17:57:02.044135 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.044141 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.044148 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.044155 | orchestrator | 2025-08-29 17:57:02.044161 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-08-29 17:57:02.044168 | orchestrator | Friday 29 August 2025 17:52:36 +0000 (0:00:00.537) 0:02:45.309 ********* 2025-08-29 17:57:02.044175 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.044181 | orchestrator | 2025-08-29 17:57:02.044188 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 17:57:02.044195 | orchestrator | Friday 29 August 2025 17:52:37 +0000 (0:00:00.932) 0:02:46.241 ********* 2025-08-29 17:57:02.044202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 17:57:02.044215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 17:57:02.044222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 17:57:02.044230 | orchestrator | 2025-08-29 17:57:02.044273 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 17:57:02.044283 | orchestrator | Friday 29 August 2025 17:52:41 +0000 (0:00:03.494) 0:02:49.736 ********* 2025-08-29 17:57:02.044294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 17:57:02.044301 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.044325 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 17:57:02.044332 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.044339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 17:57:02.044507 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.044520 | orchestrator | 2025-08-29 17:57:02.044527 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 17:57:02.044582 | orchestrator | Friday 29 August 2025 17:52:42 +0000 (0:00:00.837) 0:02:50.573 ********* 2025-08-29 17:57:02.044592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:57:02.044599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:57:02.044606 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.044613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:57:02.044619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:57:02.044626 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.044633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:57:02.044639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 17:57:02.044646 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.044653 | orchestrator | 2025-08-29 17:57:02.044659 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 17:57:02.044666 | orchestrator | Friday 29 August 2025 17:52:42 +0000 (0:00:00.688) 0:02:51.262 ********* 2025-08-29 17:57:02.044673 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.044679 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.044686 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.044693 
| orchestrator | 2025-08-29 17:57:02.044716 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 17:57:02.044723 | orchestrator | Friday 29 August 2025 17:52:44 +0000 (0:00:01.410) 0:02:52.672 ********* 2025-08-29 17:57:02.044734 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.044741 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.044747 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.044787 | orchestrator | 2025-08-29 17:57:02.044794 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 17:57:02.044820 | orchestrator | Friday 29 August 2025 17:52:46 +0000 (0:00:02.319) 0:02:54.991 ********* 2025-08-29 17:57:02.044828 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.044834 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.044841 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.044848 | orchestrator | 2025-08-29 17:57:02.044854 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 17:57:02.044861 | orchestrator | Friday 29 August 2025 17:52:47 +0000 (0:00:00.680) 0:02:55.672 ********* 2025-08-29 17:57:02.044867 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.044874 | orchestrator | 2025-08-29 17:57:02.044880 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 17:57:02.044887 | orchestrator | Friday 29 August 2025 17:52:48 +0000 (0:00:01.190) 0:02:56.863 ********* 2025-08-29 17:57:02.044895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:57:02.044917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:57:02.044998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 17:57:02.045008 | orchestrator | 2025-08-29 17:57:02.045015 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 17:57:02.045021 | orchestrator | Friday 29 August 2025 17:52:53 +0000 (0:00:05.610) 0:03:02.474 ********* 2025-08-29 17:57:02.045039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:57:02.045052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.045060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:57:02.045067 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.045119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 17:57:02.045808 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.045827 | orchestrator | 2025-08-29 17:57:02.045834 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-08-29 17:57:02.045841 | orchestrator | Friday 29 August 2025 17:52:55 +0000 (0:00:01.350) 0:03:03.824 ********* 2025-08-29 17:57:02.045849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:57:02.045859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:57:02.045866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:57:02.045874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:57:02.045882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 17:57:02.045889 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.045896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:57:02.045927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:57:02.045935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:57:02.045950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:57:02.045957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:57:02.045964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:57:02.045971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-08-29 17:57:02.045977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 17:57:02.045984 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.045991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-08-29 17:57:02.045998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-08-29 17:57:02.046005 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.046011 | orchestrator | 2025-08-29 17:57:02.046045 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-08-29 17:57:02.046078 | orchestrator | Friday 29 August 2025 17:52:56 +0000 (0:00:01.455) 0:03:05.279 ********* 2025-08-29 17:57:02.046087 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.046094 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.046100 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.046107 | orchestrator | 2025-08-29 17:57:02.046114 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-08-29 17:57:02.046120 | orchestrator | Friday 29 August 2025 17:52:58 +0000 (0:00:01.488) 0:03:06.768 ********* 2025-08-29 17:57:02.046127 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.046133 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.046140 | orchestrator | changed: [testbed-node-2] 2025-08-29 
17:57:02.046147 | orchestrator | 2025-08-29 17:57:02.046440 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-08-29 17:57:02.046454 | orchestrator | Friday 29 August 2025 17:53:00 +0000 (0:00:02.305) 0:03:09.073 ********* 2025-08-29 17:57:02.046461 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.046467 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.046474 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.046481 | orchestrator | 2025-08-29 17:57:02.046488 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-08-29 17:57:02.046494 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:00.576) 0:03:09.650 ********* 2025-08-29 17:57:02.046501 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.046508 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.046522 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.046529 | orchestrator | 2025-08-29 17:57:02.046536 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-08-29 17:57:02.046542 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:00.450) 0:03:10.101 ********* 2025-08-29 17:57:02.046549 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.046556 | orchestrator | 2025-08-29 17:57:02.046562 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-08-29 17:57:02.046615 | orchestrator | Friday 29 August 2025 17:53:02 +0000 (0:00:01.269) 0:03:11.370 ********* 2025-08-29 17:57:02.046629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:57:02.046639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:57:02.046646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:57:02.046654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:57:02.046666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:57:02.046768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:57:02.046817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:57:02.046825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:57:02.046987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:57:02.046996 | orchestrator |
2025-08-29 17:57:02.047003 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-08-29 17:57:02.047009 | orchestrator | Friday 29 August 2025 17:53:06 +0000 (0:00:03.880) 0:03:15.251 *********
2025-08-29 17:57:02.047016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:57:02.047050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:57:02.047059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:57:02.047065 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.047072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:57:02.047079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:57:02.047085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:57:02.047096 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.047160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 17:57:02.047171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 17:57:02.047177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 17:57:02.047184 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.047190 | orchestrator |
2025-08-29 17:57:02.047196 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-08-29 17:57:02.047202 | orchestrator | Friday 29 August 2025 17:53:07 +0000 (0:00:00.714) 0:03:15.966 *********
2025-08-29 17:57:02.047210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:57:02.047217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:57:02.047224 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.047230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:57:02.047241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:57:02.047247 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.047254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:57:02.047260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 17:57:02.047266 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.047272 | orchestrator |
2025-08-29 17:57:02.047278 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-08-29 17:57:02.047284 | orchestrator | Friday 29 August 2025 17:53:08 +0000 (0:00:00.966) 0:03:16.933 *********
2025-08-29 17:57:02.047291 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.047297 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.047303 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.047326 | orchestrator |
2025-08-29 17:57:02.047333 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-08-29 17:57:02.047359 | orchestrator | Friday 29 August 2025 17:53:10 +0000 (0:00:01.669) 0:03:18.603 *********
2025-08-29 17:57:02.047366 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.047373 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.047379 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.047385 | orchestrator |
2025-08-29 17:57:02.047394 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-08-29 17:57:02.047467 | orchestrator | Friday 29 August 2025 17:53:12 +0000 (0:00:02.262) 0:03:20.865 *********
2025-08-29 17:57:02.047475 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.047481 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.047488 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.047494 | orchestrator |
2025-08-29 17:57:02.047500 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-08-29 17:57:02.047548 | orchestrator | Friday 29 August 2025 17:53:12 +0000 (0:00:00.347) 0:03:21.213 *********
2025-08-29 17:57:02.047555 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.047561 | orchestrator |
2025-08-29 17:57:02.047567 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-08-29 17:57:02.047573 | orchestrator | Friday 29 August 2025 17:53:13 +0000 (0:00:01.104) 0:03:22.318 *********
2025-08-29 17:57:02.047580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:57:02.047639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.047649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:57:02.047677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.047690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:57:02.047698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.047866 | orchestrator |
2025-08-29 17:57:02.047875 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-08-29 17:57:02.047882 | orchestrator | Friday 29 August 2025 17:53:18 +0000 (0:00:04.345) 0:03:26.664 *********
2025-08-29 17:57:02.047889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:57:02.047895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.047902 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.047961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:57:02.047971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.047977 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.047989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 17:57:02.047995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048002 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.048008 | orchestrator |
2025-08-29 17:57:02.048014 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-08-29 17:57:02.048020 | orchestrator | Friday 29 August 2025 17:53:18 +0000 (0:00:00.791) 0:03:27.455 *********
2025-08-29 17:57:02.048027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:57:02.048035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:57:02.048041 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.048047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:57:02.048053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:57:02.048059 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.048084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:57:02.048095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-08-29 17:57:02.048101 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.048107 | orchestrator |
2025-08-29 17:57:02.048114 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-08-29 17:57:02.048186 | orchestrator | Friday 29 August 2025 17:53:19 +0000 (0:00:01.040) 0:03:28.495 *********
2025-08-29 17:57:02.048195 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.048201 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.048207 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.048213 | orchestrator |
2025-08-29 17:57:02.048220 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-08-29 17:57:02.048229 | orchestrator | Friday 29 August 2025 17:53:21 +0000 (0:00:01.837) 0:03:30.333 *********
2025-08-29 17:57:02.048236 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.048242 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.048248 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.048675 | orchestrator |
2025-08-29 17:57:02.048690 | orchestrator | TASK [include_role : manila] ***************************************************
2025-08-29 17:57:02.048695 | orchestrator | Friday 29 August 2025 17:53:23 +0000 (0:00:02.088) 0:03:32.422 *********
2025-08-29 17:57:02.048701 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.048706 | orchestrator |
2025-08-29 17:57:02.048712 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-08-29 17:57:02.048717 | orchestrator | Friday 29 August 2025 17:53:25 +0000 (0:00:01.126) 0:03:33.548 *********
2025-08-29 17:57:02.048723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 17:57:02.048730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 17:57:02.048937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.048955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-08-29 17:57:02.049003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.049021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.049027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711',
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049032 | orchestrator | 2025-08-29 17:57:02.049038 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-08-29 17:57:02.049044 | orchestrator | Friday 29 August 2025 17:53:29 +0000 (0:00:04.609) 0:03:38.158 ********* 2025-08-29 17:57:02.049050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 17:57:02.049055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049134 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.049140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 17:57:02.049146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049157 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 17:57:02.049200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049211 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.049325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.049347 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.049352 | orchestrator | 2025-08-29 17:57:02.049358 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 17:57:02.049364 | orchestrator | Friday 29 August 2025 17:53:31 +0000 (0:00:01.455) 0:03:39.613 ********* 2025-08-29 17:57:02.049401 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:57:02.049409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:57:02.049415 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.049420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:57:02.049426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:57:02.049431 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.049444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:57:02.049450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 17:57:02.049456 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.049466 | orchestrator | 2025-08-29 17:57:02.049471 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 17:57:02.049477 | orchestrator | Friday 29 August 2025 17:53:32 +0000 (0:00:00.959) 0:03:40.573 ********* 2025-08-29 17:57:02.049666 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.049672 | orchestrator | changed: [testbed-node-1] 2025-08-29 
17:57:02.049677 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.049683 | orchestrator | 2025-08-29 17:57:02.049688 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 17:57:02.049693 | orchestrator | Friday 29 August 2025 17:53:33 +0000 (0:00:01.305) 0:03:41.878 ********* 2025-08-29 17:57:02.049699 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.049704 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.049709 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.049715 | orchestrator | 2025-08-29 17:57:02.049720 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 17:57:02.049725 | orchestrator | Friday 29 August 2025 17:53:35 +0000 (0:00:02.226) 0:03:44.104 ********* 2025-08-29 17:57:02.049731 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.049736 | orchestrator | 2025-08-29 17:57:02.049742 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 17:57:02.049791 | orchestrator | Friday 29 August 2025 17:53:37 +0000 (0:00:01.484) 0:03:45.589 ********* 2025-08-29 17:57:02.049799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 17:57:02.049804 | orchestrator | 2025-08-29 17:57:02.049810 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 17:57:02.049819 | orchestrator | Friday 29 August 2025 17:53:40 +0000 (0:00:03.062) 0:03:48.652 ********* 2025-08-29 17:57:02.049825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:57:02.049833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:57:02.049847 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.049891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:57:02.049900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:57:02.049906 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.049912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:57:02.049925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:57:02.049930 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.049936 | orchestrator | 2025-08-29 17:57:02.049941 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 17:57:02.049947 | orchestrator | Friday 29 August 2025 17:53:42 +0000 (0:00:02.636) 0:03:51.288 ********* 2025-08-29 17:57:02.050464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:57:02.050488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:57:02.050501 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.050507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:57:02.050574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:57:02.050583 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.050589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 17:57:02.050600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 17:57:02.050606 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.050611 | orchestrator | 2025-08-29 17:57:02.050617 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 17:57:02.050622 | orchestrator | Friday 29 August 2025 17:53:45 +0000 (0:00:02.739) 0:03:54.027 ********* 2025-08-29 17:57:02.050628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:57:02.050675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:57:02.050684 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.050690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:57:02.050695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:57:02.050701 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.050711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:57:02.050716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 17:57:02.050722 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.050727 | orchestrator | 2025-08-29 17:57:02.050733 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] 
************ 2025-08-29 17:57:02.050738 | orchestrator | Friday 29 August 2025 17:53:47 +0000 (0:00:02.474) 0:03:56.502 ********* 2025-08-29 17:57:02.050744 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.050749 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.050754 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.050760 | orchestrator | 2025-08-29 17:57:02.050765 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-08-29 17:57:02.050770 | orchestrator | Friday 29 August 2025 17:53:50 +0000 (0:00:02.144) 0:03:58.646 ********* 2025-08-29 17:57:02.050775 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.050781 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.050786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.050792 | orchestrator | 2025-08-29 17:57:02.050797 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-08-29 17:57:02.050802 | orchestrator | Friday 29 August 2025 17:53:52 +0000 (0:00:01.884) 0:04:00.531 ********* 2025-08-29 17:57:02.050808 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.050813 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.050818 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.050824 | orchestrator | 2025-08-29 17:57:02.050829 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 17:57:02.050834 | orchestrator | Friday 29 August 2025 17:53:52 +0000 (0:00:00.580) 0:04:01.111 ********* 2025-08-29 17:57:02.050840 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.050845 | orchestrator | 2025-08-29 17:57:02.050850 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 17:57:02.050905 | orchestrator | Friday 29 August 2025 17:53:53 +0000 
(0:00:01.210) 0:04:02.321 ********* 2025-08-29 17:57:02.050918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 17:57:02.050928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 17:57:02.050935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 17:57:02.050940 | orchestrator | 2025-08-29 17:57:02.050946 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 17:57:02.050951 | orchestrator | Friday 29 August 2025 17:53:55 +0000 (0:00:01.416) 0:04:03.737 ********* 2025-08-29 17:57:02.050957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 17:57:02.050962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 17:57:02.050968 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.050973 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.051017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 17:57:02.051030 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.051036 | orchestrator | 2025-08-29 17:57:02.051041 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-08-29 17:57:02.051046 | orchestrator | Friday 29 August 2025 17:53:56 +0000 (0:00:00.780) 0:04:04.518 ********* 2025-08-29 17:57:02.051052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 17:57:02.051059 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 17:57:02.051064 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.051070 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.051076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-08-29 17:57:02.051081 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.051087 | orchestrator | 2025-08-29 17:57:02.051092 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-08-29 17:57:02.051098 | orchestrator | Friday 29 August 2025 17:53:56 +0000 (0:00:00.662) 0:04:05.180 ********* 2025-08-29 17:57:02.051103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.051108 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.051114 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.051119 | orchestrator | 2025-08-29 17:57:02.051124 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-08-29 17:57:02.051130 | orchestrator | Friday 29 August 2025 17:53:57 +0000 (0:00:00.512) 0:04:05.693 ********* 2025-08-29 17:57:02.051135 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.051140 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.051146 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.051151 | orchestrator | 2025-08-29 17:57:02.051156 | orchestrator | TASK [include_role : mistral] ************************************************** 
2025-08-29 17:57:02.051162 | orchestrator | Friday 29 August 2025 17:53:58 +0000 (0:00:01.491) 0:04:07.185 ********* 2025-08-29 17:57:02.051167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.051181 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.051187 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.051192 | orchestrator | 2025-08-29 17:57:02.051198 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-08-29 17:57:02.051203 | orchestrator | Friday 29 August 2025 17:53:59 +0000 (0:00:00.663) 0:04:07.848 ********* 2025-08-29 17:57:02.051208 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.051214 | orchestrator | 2025-08-29 17:57:02.051219 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-08-29 17:57:02.051224 | orchestrator | Friday 29 August 2025 17:54:00 +0000 (0:00:01.347) 0:04:09.196 ********* 2025-08-29 17:57:02.051230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:57:02.051279 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:57:02.051323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.051403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:57:02.051409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.051581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.051587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:57:02.051593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.051673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 17:57:02.051731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.051760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.051810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:57:02.051829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051903 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.051917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.051922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.051938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-08-29 17:57:02.051980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 17:57:02.052008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.052015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': 
False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}}) 
2025-08-29 17:57:02.052021 | orchestrator | 
2025-08-29 17:57:02.052026 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-08-29 17:57:02.052038 | orchestrator | Friday 29 August 2025 17:54:05 +0000 (0:00:05.225) 0:04:14.422 *********
2025-08-29 17:57:02.052043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2025-08-29 17:57:02.052049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image':
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 17:57:02.052120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.052131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 17:57:02.052176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 17:57:02.052191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 17:57:02.052210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052216 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.052221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 17:57:02.052266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 17:57:02.052395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 17:57:02.052406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:57:02.052424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 17:57:02.052501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052541 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.052552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:57:02.052562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-08-29 17:57:02.052671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 17:57:02.052760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:57:02.052772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:57:02.052858 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.052867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-08-29 17:57:02.052903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-08-29 17:57:02.052945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 17:57:02.052957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.052967 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.052972 | orchestrator |
2025-08-29 17:57:02.052977 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-08-29 17:57:02.052983 | orchestrator | Friday 29 August 2025 17:54:07 +0000 (0:00:01.783) 0:04:16.205 *********
2025-08-29 17:57:02.052988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-08-29 17:57:02.052993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-08-29 17:57:02.052998 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.053003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-08-29 17:57:02.053019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-08-29 17:57:02.053024 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.053029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-08-29 17:57:02.053034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-08-29 17:57:02.053039 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.053043 | orchestrator |
2025-08-29 17:57:02.053048 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-08-29 17:57:02.053053 | orchestrator | Friday 29 August 2025 17:54:09 +0000 (0:00:01.713) 0:04:17.919 *********
2025-08-29 17:57:02.053058 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.053063 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.053067 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.053072 | orchestrator |
2025-08-29 17:57:02.053077 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-08-29 17:57:02.053082 | orchestrator | Friday 29 August 2025 17:54:11 +0000 (0:00:02.037) 0:04:19.956 *********
2025-08-29 17:57:02.053087 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.053091 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.053096 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.053101 | orchestrator |
2025-08-29 17:57:02.053106 | orchestrator | TASK [include_role : placement] ************************************************
2025-08-29 17:57:02.053111 | orchestrator | Friday 29 August 2025 17:54:13 +0000 (0:00:02.187) 0:04:22.143 *********
2025-08-29 17:57:02.053115 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.053120 | orchestrator |
2025-08-29 17:57:02.053125 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-08-29 17:57:02.053130 | orchestrator | Friday 29 August 2025 17:54:15 +0000 (0:00:01.790) 0:04:23.933 *********
2025-08-29 17:57:02.053153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053176 | orchestrator |
2025-08-29 17:57:02.053180 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-08-29 17:57:02.053185 | orchestrator | Friday 29 August 2025 17:54:18 +0000 (0:00:03.466) 0:04:27.400 *********
2025-08-29 17:57:02.053190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053195 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.053214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053225 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.053232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053238 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.053243 | orchestrator |
2025-08-29 17:57:02.053247 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-08-29 17:57:02.053252 | orchestrator | Friday 29 August 2025 17:54:19 +0000 (0:00:00.966) 0:04:28.366 *********
2025-08-29 17:57:02.053257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:57:02.053263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:57:02.053268 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.053273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:57:02.053278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:57:02.053282 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.053287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:57:02.053292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-08-29 17:57:02.053297 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.053302 | orchestrator |
2025-08-29 17:57:02.053323 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-08-29 17:57:02.053329 | orchestrator | Friday 29 August 2025 17:54:20 +0000 (0:00:00.983) 0:04:29.350 *********
2025-08-29 17:57:02.053333 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.053343 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.053348 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.053352 | orchestrator |
2025-08-29 17:57:02.053357 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-08-29 17:57:02.053362 | orchestrator | Friday 29 August 2025 17:54:22 +0000 (0:00:01.236) 0:04:30.587 *********
2025-08-29 17:57:02.053367 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.053371 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.053376 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.053381 | orchestrator |
2025-08-29 17:57:02.053386 | orchestrator | TASK [include_role : nova] *****************************************************
2025-08-29 17:57:02.053391 | orchestrator | Friday 29 August 2025 17:54:24 +0000 (0:00:02.049) 0:04:32.637 *********
2025-08-29 17:57:02.053395 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.053400 | orchestrator |
2025-08-29 17:57:02.053405 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-08-29 17:57:02.053410 | orchestrator | Friday 29 August 2025 17:54:25 +0000 (0:00:01.745) 0:04:34.382 *********
2025-08-29 17:57:02.053433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.053440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.053461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.053496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053512 | orchestrator | 2025-08-29 17:57:02.053518 | orchestrator | TASK 
[haproxy-config : Add configuration for nova when using single external frontend] *** 2025-08-29 17:57:02.053523 | orchestrator | Friday 29 August 2025 17:54:30 +0000 (0:00:05.119) 0:04:39.501 ********* 2025-08-29 17:57:02.053543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.053551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053563 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.053569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.053579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053651 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.053677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.053684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 17:57:02.053702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.053707 | orchestrator | 2025-08-29 17:57:02.053713 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-08-29 17:57:02.053719 | orchestrator | Friday 29 August 2025 17:54:31 +0000 (0:00:00.733) 0:04:40.235 ********* 2025-08-29 17:57:02.053727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053762 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.053771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}})  2025-08-29 17:57:02.053789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053829 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.053835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 17:57:02.053857 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.053862 | orchestrator | 2025-08-29 17:57:02.053872 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-08-29 17:57:02.053877 | orchestrator | Friday 29 August 2025 17:54:33 +0000 (0:00:01.378) 0:04:41.614 ********* 2025-08-29 17:57:02.053882 
| orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.053887 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.053891 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.053896 | orchestrator | 2025-08-29 17:57:02.053901 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-08-29 17:57:02.053906 | orchestrator | Friday 29 August 2025 17:54:34 +0000 (0:00:01.419) 0:04:43.033 ********* 2025-08-29 17:57:02.053910 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.053915 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.053920 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.053924 | orchestrator | 2025-08-29 17:57:02.053929 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-08-29 17:57:02.053934 | orchestrator | Friday 29 August 2025 17:54:36 +0000 (0:00:02.256) 0:04:45.289 ********* 2025-08-29 17:57:02.053939 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.053946 | orchestrator | 2025-08-29 17:57:02.053954 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-08-29 17:57:02.053962 | orchestrator | Friday 29 August 2025 17:54:38 +0000 (0:00:01.660) 0:04:46.949 ********* 2025-08-29 17:57:02.053970 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-08-29 17:57:02.053978 | orchestrator | 2025-08-29 17:57:02.053985 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-08-29 17:57:02.053990 | orchestrator | Friday 29 August 2025 17:54:39 +0000 (0:00:00.889) 0:04:47.839 ********* 2025-08-29 17:57:02.053995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': 
True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 17:57:02.054001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 17:57:02.054006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 17:57:02.054011 | orchestrator | 2025-08-29 17:57:02.054039 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-08-29 17:57:02.054062 | orchestrator | Friday 29 August 2025 17:54:44 +0000 (0:00:04.921) 0:04:52.760 ********* 2025-08-29 17:57:02.054071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 17:57:02.054081 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.054086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 17:57:02.054091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.054096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 17:57:02.054101 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.054106 | orchestrator | 2025-08-29 17:57:02.054111 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-08-29 17:57:02.054116 | orchestrator | Friday 29 August 2025 17:54:45 +0000 (0:00:01.530) 0:04:54.290 ********* 2025-08-29 17:57:02.054121 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 17:57:02.054126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 17:57:02.054132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.054137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 17:57:02.054142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 17:57:02.054147 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.054151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 17:57:02.054157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 17:57:02.054161 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.054166 | orchestrator | 2025-08-29 17:57:02.054171 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 
2025-08-29 17:57:02.054176 | orchestrator | Friday 29 August 2025 17:54:47 +0000 (0:00:01.697) 0:04:55.988 *********
2025-08-29 17:57:02.054180 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.054185 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.054190 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.054195 | orchestrator |
2025-08-29 17:57:02.054204 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 17:57:02.054209 | orchestrator | Friday 29 August 2025 17:54:50 +0000 (0:00:02.522) 0:04:58.511 *********
2025-08-29 17:57:02.054213 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.054218 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.054225 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.054233 | orchestrator |
2025-08-29 17:57:02.054243 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-08-29 17:57:02.054264 | orchestrator | Friday 29 August 2025 17:54:53 +0000 (0:00:03.268) 0:05:01.779 *********
2025-08-29 17:57:02.054272 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-08-29 17:57:02.054277 | orchestrator |
2025-08-29 17:57:02.054282 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-08-29 17:57:02.054287 | orchestrator | Friday 29 August 2025 17:54:54 +0000 (0:00:01.508) 0:05:03.288 *********
2025-08-29 17:57:02.054292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:57:02.054297 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:57:02.054346 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.054351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:57:02.054356 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.054361 | orchestrator |
2025-08-29 17:57:02.054366 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-08-29 17:57:02.054371 | orchestrator | Friday 29 August 2025 17:54:56 +0000 (0:00:01.360) 0:05:04.648 *********
2025-08-29 17:57:02.054376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:57:02.054381 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:57:02.054396 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.054401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-08-29 17:57:02.054406 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.054410 | orchestrator |
2025-08-29 17:57:02.054415 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-08-29 17:57:02.054435 | orchestrator | Friday 29 August 2025 17:54:57 +0000 (0:00:01.409) 0:05:06.058 *********
2025-08-29 17:57:02.054440 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054445 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.054450 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.054455 | orchestrator |
2025-08-29 17:57:02.054462 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 17:57:02.054467 | orchestrator | Friday 29 August 2025 17:54:59 +0000 (0:00:02.034) 0:05:08.092 *********
2025-08-29 17:57:02.054472 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.054477 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.054482 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.054486 | orchestrator |
2025-08-29 17:57:02.054491 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 17:57:02.054496 | orchestrator | Friday 29 August 2025 17:55:02 +0000 (0:00:02.712) 0:05:10.804 *********
2025-08-29 17:57:02.054501 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.054505 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.054510 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.054515 | orchestrator |
2025-08-29 17:57:02.054520 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-08-29 17:57:02.054524 | orchestrator | Friday 29 August 2025 17:55:05 +0000 (0:00:03.198) 0:05:14.003 *********
2025-08-29 17:57:02.054529 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-08-29 17:57:02.054534 | orchestrator |
2025-08-29 17:57:02.054539 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-08-29 17:57:02.054544 | orchestrator | Friday 29 August 2025 17:55:06 +0000 (0:00:00.933) 0:05:14.937 *********
2025-08-29 17:57:02.054549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 17:57:02.054554 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 17:57:02.054567 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.054572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 17:57:02.054577 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.054582 | orchestrator |
2025-08-29 17:57:02.054587 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-08-29 17:57:02.054592 | orchestrator | Friday 29 August 2025 17:55:07 +0000 (0:00:01.457) 0:05:16.394 *********
2025-08-29 17:57:02.054597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 17:57:02.054602 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 17:57:02.054626 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.054634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-08-29 17:57:02.054639 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.054644 | orchestrator |
2025-08-29 17:57:02.054648 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-08-29 17:57:02.054653 | orchestrator | Friday 29 August 2025 17:55:09 +0000 (0:00:01.618) 0:05:18.013 *********
2025-08-29 17:57:02.054658 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054663 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.054667 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.054672 | orchestrator |
2025-08-29 17:57:02.054677 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-08-29 17:57:02.054682 | orchestrator | Friday 29 August 2025 17:55:11 +0000 (0:00:01.584) 0:05:19.597 *********
2025-08-29 17:57:02.054687 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.054691 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.054696 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.054700 | orchestrator |
2025-08-29 17:57:02.054705 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-08-29 17:57:02.054709 | orchestrator | Friday 29 August 2025 17:55:13 +0000 (0:00:02.510) 0:05:22.108 *********
2025-08-29 17:57:02.054717 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.054721 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.054726 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.054730 | orchestrator |
2025-08-29 17:57:02.054735 | orchestrator | TASK [include_role : octavia] **************************************************
2025-08-29 17:57:02.054740 | orchestrator | Friday 29 August 2025 17:55:17 +0000 (0:00:03.431) 0:05:25.539 *********
2025-08-29 17:57:02.054744 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.054749 | orchestrator |
2025-08-29 17:57:02.054753 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-08-29 17:57:02.054758 | orchestrator | Friday 29 August 2025 17:55:18 +0000 (0:00:01.789) 0:05:27.329 *********
2025-08-29 17:57:02.054763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.054768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 17:57:02.054773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.054809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.054814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 17:57:02.054819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.054835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 17:57:02.054848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.054866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.054875 | orchestrator |
2025-08-29 17:57:02.054880 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-08-29 17:57:02.054884 | orchestrator | Friday 29 August 2025 17:55:22 +0000 (0:00:03.936) 0:05:31.265 *********
2025-08-29 17:57:02.054905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.054914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 17:57:02.054921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.054935 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.054952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.054974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 17:57:02.054987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.054995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.055003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.055011 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.055020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 17:57:02.055028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 17:57:02.055058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.055069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 17:57:02.055074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 17:57:02.055078 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.055083 | orchestrator |
2025-08-29 17:57:02.055088 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-08-29 17:57:02.055092 | orchestrator | Friday 29 August 2025 17:55:23 +0000 (0:00:01.123) 0:05:32.389 *********
2025-08-29 17:57:02.055097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-08-29 17:57:02.055102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-08-29 17:57:02.055106 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.055111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-08-29 17:57:02.055116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-08-29 17:57:02.055120 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.055125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-08-29 17:57:02.055130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-08-29 17:57:02.055134 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.055139 | orchestrator |
2025-08-29 17:57:02.055143 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-08-29 17:57:02.055148 | orchestrator | Friday 29 August 2025 17:55:25 +0000 (0:00:01.353) 0:05:33.742 *********
2025-08-29 17:57:02.055152 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.055157 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.055166 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.055171 | orchestrator |
2025-08-29 17:57:02.055175 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-08-29 17:57:02.055180 | orchestrator | Friday 29 August 2025 17:55:26 +0000 (0:00:01.413) 0:05:35.156 *********
2025-08-29 17:57:02.055184 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.055189 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.055193 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.055198 | orchestrator |
2025-08-29 17:57:02.055202 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-08-29 17:57:02.055207 | orchestrator | Friday 29 August 2025 17:55:28 +0000 (0:00:02.231) 0:05:37.387 *********
2025-08-29 17:57:02.055225 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:57:02.055230 | orchestrator |
2025-08-29 17:57:02.055235 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-08-29 17:57:02.055242 | orchestrator | Friday 29 August 2025 17:55:30 +0000 (0:00:01.898) 0:05:39.286 *********
2025-08-29 17:57:02.055248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 17:57:02.055253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 17:57:02.055258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 17:57:02.055264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:57:02.055289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:57:02.055295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 17:57:02.055300 | orchestrator | 2025-08-29 17:57:02.055305 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 17:57:02.055324 | orchestrator | Friday 29 August 2025 17:55:36 +0000 
(0:00:05.600) 0:05:44.887 ********* 2025-08-29 17:57:02.055329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:57:02.055338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}})  2025-08-29 17:57:02.055343 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.055366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:57:02.055372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:57:02.055377 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.055382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 17:57:02.055391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 17:57:02.055396 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.055400 | orchestrator | 2025-08-29 17:57:02.055405 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-08-29 17:57:02.055409 | orchestrator | Friday 29 August 2025 17:55:37 +0000 (0:00:00.727) 0:05:45.614 ********* 2025-08-29 17:57:02.055427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 17:57:02.055435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 17:57:02.055440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:57:02.055448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:57:02.055453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:57:02.055460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:57:02.055465 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.055470 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.055474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 17:57:02.055479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:57:02.055483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 17:57:02.055488 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.055493 | orchestrator | 2025-08-29 17:57:02.055497 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 17:57:02.055508 | orchestrator | Friday 29 August 2025 17:55:38 +0000 (0:00:01.823) 0:05:47.438 ********* 2025-08-29 17:57:02.055513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.055517 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.055522 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.055526 | orchestrator | 2025-08-29 17:57:02.055531 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 17:57:02.055535 | orchestrator | Friday 29 August 2025 17:55:39 +0000 (0:00:00.503) 0:05:47.941 ********* 2025-08-29 17:57:02.055540 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
17:57:02.055544 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.055549 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.055553 | orchestrator | 2025-08-29 17:57:02.055557 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 17:57:02.055565 | orchestrator | Friday 29 August 2025 17:55:40 +0000 (0:00:01.512) 0:05:49.454 ********* 2025-08-29 17:57:02.055572 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.055580 | orchestrator | 2025-08-29 17:57:02.055586 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 17:57:02.055590 | orchestrator | Friday 29 August 2025 17:55:42 +0000 (0:00:01.798) 0:05:51.252 ********* 2025-08-29 17:57:02.055595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:57:02.055619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:57:02.055625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:57:02.055630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:57:02.055639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 17:57:02.055694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:57:02.055698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:57:02.055736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:57:02.055745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:57:02.055750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 
'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:57:02.055755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 17:57:02.055801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:57:02.055808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055825 | orchestrator | 2025-08-29 17:57:02.055830 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-08-29 17:57:02.055834 | orchestrator | Friday 29 August 2025 17:55:47 +0000 (0:00:04.579) 0:05:55.832 ********* 2025-08-29 17:57:02.055839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 17:57:02.055844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:57:02.055849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.055872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 17:57:02.055878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:57:02.055885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2025-08-29 17:57:02.055912 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.055926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 17:57:02.055935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:57:02.055939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-08-29 17:57:02.055944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 17:57:02.055953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-08-29 17:57:02.055964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 17:57:02.055973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 17:57:02.055978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:57:02.055988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.055999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.056010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.056015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': 
{}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 17:57:02.056020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.056040 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 17:57:02.056050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.056055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 17:57:02.056068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 17:57:02.056073 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056078 | orchestrator | 2025-08-29 17:57:02.056082 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-08-29 17:57:02.056087 | orchestrator | Friday 29 August 
2025 17:55:48 +0000 (0:00:00.965) 0:05:56.798 ********* 2025-08-29 17:57:02.056092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 17:57:02.056096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 17:57:02.056101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:57:02.056106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:57:02.056111 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 17:57:02.056120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 17:57:02.056125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:57:02.056130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:57:02.056135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-08-29 17:57:02.056139 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-08-29 17:57:02.056149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:57:02.056157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-08-29 17:57:02.056161 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056166 | orchestrator | 2025-08-29 17:57:02.056171 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-08-29 17:57:02.056175 | 
orchestrator | Friday 29 August 2025 17:55:49 +0000 (0:00:01.323) 0:05:58.121 ********* 2025-08-29 17:57:02.056182 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056186 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056191 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056195 | orchestrator | 2025-08-29 17:57:02.056202 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-08-29 17:57:02.056207 | orchestrator | Friday 29 August 2025 17:55:50 +0000 (0:00:00.511) 0:05:58.633 ********* 2025-08-29 17:57:02.056211 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056216 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056221 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056225 | orchestrator | 2025-08-29 17:57:02.056229 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-08-29 17:57:02.056234 | orchestrator | Friday 29 August 2025 17:55:51 +0000 (0:00:01.544) 0:06:00.177 ********* 2025-08-29 17:57:02.056238 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.056243 | orchestrator | 2025-08-29 17:57:02.056247 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-08-29 17:57:02.056252 | orchestrator | Friday 29 August 2025 17:55:53 +0000 (0:00:01.563) 0:06:01.741 ********* 2025-08-29 17:57:02.056258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:57:02.056266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:57:02.056278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 17:57:02.056287 | orchestrator | 2025-08-29 17:57:02.056294 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 17:57:02.056302 | orchestrator | Friday 29 August 2025 17:55:55 +0000 (0:00:02.670) 0:06:04.411 ********* 2025-08-29 17:57:02.056352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 17:57:02.056358 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056363 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 17:57:02.056368 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 17:57:02.056381 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056386 | orchestrator | 2025-08-29 17:57:02.056390 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 17:57:02.056395 | orchestrator | Friday 29 August 2025 17:55:56 +0000 (0:00:00.504) 0:06:04.916 ********* 2025-08-29 17:57:02.056400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 17:57:02.056404 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 17:57:02.056413 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 17:57:02.056422 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056427 | orchestrator | 2025-08-29 17:57:02.056431 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 17:57:02.056436 | orchestrator | Friday 29 August 2025 17:55:57 +0000 (0:00:00.737) 0:06:05.654 ********* 2025-08-29 17:57:02.056440 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056445 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056450 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056454 | orchestrator | 2025-08-29 17:57:02.056459 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 17:57:02.056465 | orchestrator | Friday 
29 August 2025 17:55:58 +0000 (0:00:00.920) 0:06:06.574 ********* 2025-08-29 17:57:02.056470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056474 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056479 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056483 | orchestrator | 2025-08-29 17:57:02.056491 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 17:57:02.056495 | orchestrator | Friday 29 August 2025 17:55:59 +0000 (0:00:01.432) 0:06:08.006 ********* 2025-08-29 17:57:02.056500 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 17:57:02.056504 | orchestrator | 2025-08-29 17:57:02.056508 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 17:57:02.056512 | orchestrator | Friday 29 August 2025 17:56:01 +0000 (0:00:01.520) 0:06:09.527 ********* 2025-08-29 17:57:02.056516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.056521 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.056530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.056539 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.056544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.056548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 17:57:02.056556 | orchestrator | 2025-08-29 17:57:02.056560 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 17:57:02.056564 | orchestrator | Friday 29 August 2025 17:56:07 +0000 (0:00:06.916) 0:06:16.443 ********* 2025-08-29 17:57:02.056569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.056576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.056580 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.056591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.056598 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.056607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 17:57:02.056611 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056615 | orchestrator | 2025-08-29 17:57:02.056619 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 17:57:02.056626 | orchestrator | Friday 29 August 2025 17:56:08 +0000 (0:00:00.707) 0:06:17.151 ********* 2025-08-29 17:57:02.056630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 
'no'}})  2025-08-29 17:57:02.056639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056650 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056675 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056679 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 17:57:02.056692 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056696 | orchestrator | 2025-08-29 17:57:02.056700 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 17:57:02.056704 | orchestrator | Friday 29 August 2025 17:56:09 +0000 (0:00:01.066) 0:06:18.217 ********* 2025-08-29 17:57:02.056708 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.056712 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.056716 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.056720 | orchestrator | 2025-08-29 17:57:02.056724 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 17:57:02.056758 | orchestrator | Friday 29 August 2025 17:56:11 +0000 (0:00:02.149) 0:06:20.367 ********* 2025-08-29 17:57:02.056769 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.056774 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.056778 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.056782 | orchestrator | 2025-08-29 17:57:02.056786 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 
17:57:02.056790 | orchestrator | Friday 29 August 2025 17:56:14 +0000 (0:00:02.198) 0:06:22.566 ********* 2025-08-29 17:57:02.056794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056798 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056802 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056806 | orchestrator | 2025-08-29 17:57:02.056810 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 17:57:02.056814 | orchestrator | Friday 29 August 2025 17:56:14 +0000 (0:00:00.381) 0:06:22.947 ********* 2025-08-29 17:57:02.056819 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056823 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056827 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056831 | orchestrator | 2025-08-29 17:57:02.056838 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 17:57:02.056845 | orchestrator | Friday 29 August 2025 17:56:14 +0000 (0:00:00.380) 0:06:23.327 ********* 2025-08-29 17:57:02.056849 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056853 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056857 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056861 | orchestrator | 2025-08-29 17:57:02.056868 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 17:57:02.056872 | orchestrator | Friday 29 August 2025 17:56:15 +0000 (0:00:00.345) 0:06:23.673 ********* 2025-08-29 17:57:02.056876 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056880 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056884 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056888 | orchestrator | 2025-08-29 17:57:02.056892 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-08-29 
17:57:02.056896 | orchestrator | Friday 29 August 2025 17:56:15 +0000 (0:00:00.744) 0:06:24.418 ********* 2025-08-29 17:57:02.056900 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056904 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056908 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056912 | orchestrator | 2025-08-29 17:57:02.056916 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-08-29 17:57:02.056920 | orchestrator | Friday 29 August 2025 17:56:16 +0000 (0:00:00.338) 0:06:24.756 ********* 2025-08-29 17:57:02.056924 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:57:02.056928 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:57:02.056932 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:57:02.056936 | orchestrator | 2025-08-29 17:57:02.056940 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-08-29 17:57:02.056944 | orchestrator | Friday 29 August 2025 17:56:16 +0000 (0:00:00.582) 0:06:25.339 ********* 2025-08-29 17:57:02.056948 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:57:02.056952 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:57:02.056956 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:57:02.056961 | orchestrator | 2025-08-29 17:57:02.056965 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-08-29 17:57:02.056969 | orchestrator | Friday 29 August 2025 17:56:17 +0000 (0:00:01.041) 0:06:26.381 ********* 2025-08-29 17:57:02.056973 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:57:02.056977 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:57:02.056981 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:57:02.056985 | orchestrator | 2025-08-29 17:57:02.056989 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-08-29 17:57:02.056993 | orchestrator | Friday 
29 August 2025 17:56:18 +0000 (0:00:00.401) 0:06:26.782 ********* 2025-08-29 17:57:02.056997 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:57:02.057001 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:57:02.057005 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:57:02.057009 | orchestrator | 2025-08-29 17:57:02.057013 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-08-29 17:57:02.057017 | orchestrator | Friday 29 August 2025 17:56:19 +0000 (0:00:00.880) 0:06:27.662 ********* 2025-08-29 17:57:02.057021 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:57:02.057025 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:57:02.057029 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:57:02.057033 | orchestrator | 2025-08-29 17:57:02.057037 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-08-29 17:57:02.057041 | orchestrator | Friday 29 August 2025 17:56:20 +0000 (0:00:00.940) 0:06:28.603 ********* 2025-08-29 17:57:02.057045 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:57:02.057049 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:57:02.057053 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:57:02.057057 | orchestrator | 2025-08-29 17:57:02.057061 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-08-29 17:57:02.057068 | orchestrator | Friday 29 August 2025 17:56:21 +0000 (0:00:01.290) 0:06:29.893 ********* 2025-08-29 17:57:02.057073 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:57:02.057077 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:57:02.057081 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:57:02.057085 | orchestrator | 2025-08-29 17:57:02.057089 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-08-29 17:57:02.057093 | orchestrator | Friday 29 August 2025 17:56:26 +0000 (0:00:05.146) 0:06:35.040 
*********
2025-08-29 17:57:02.057097 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.057101 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.057105 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.057109 | orchestrator |
2025-08-29 17:57:02.057113 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-08-29 17:57:02.057117 | orchestrator | Friday 29 August 2025 17:56:30 +0000 (0:00:03.913) 0:06:38.954 *********
2025-08-29 17:57:02.057121 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.057125 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.057129 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.057133 | orchestrator |
2025-08-29 17:57:02.057137 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-08-29 17:57:02.057142 | orchestrator | Friday 29 August 2025 17:56:39 +0000 (0:00:09.460) 0:06:48.414 *********
2025-08-29 17:57:02.057146 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.057150 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.057158 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.057165 | orchestrator |
2025-08-29 17:57:02.057172 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-08-29 17:57:02.057180 | orchestrator | Friday 29 August 2025 17:56:44 +0000 (0:00:04.701) 0:06:53.116 *********
2025-08-29 17:57:02.057188 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:57:02.057197 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:57:02.057203 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:57:02.057207 | orchestrator |
2025-08-29 17:57:02.057211 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-08-29 17:57:02.057215 | orchestrator | Friday 29 August 2025 17:56:54 +0000 (0:00:09.953) 0:07:03.069 *********
2025-08-29 17:57:02.057219 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.057223 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.057227 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.057231 | orchestrator |
2025-08-29 17:57:02.057235 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-08-29 17:57:02.057239 | orchestrator | Friday 29 August 2025 17:56:54 +0000 (0:00:00.419) 0:07:03.489 *********
2025-08-29 17:57:02.057244 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.057250 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.057254 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.057258 | orchestrator |
2025-08-29 17:57:02.057262 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-08-29 17:57:02.057269 | orchestrator | Friday 29 August 2025 17:56:55 +0000 (0:00:00.445) 0:07:03.935 *********
2025-08-29 17:57:02.057273 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.057280 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.057286 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.057292 | orchestrator |
2025-08-29 17:57:02.057299 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-08-29 17:57:02.057319 | orchestrator | Friday 29 August 2025 17:56:55 +0000 (0:00:00.425) 0:07:04.360 *********
2025-08-29 17:57:02.057327 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.057334 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.057340 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.057347 | orchestrator |
2025-08-29 17:57:02.057353 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-08-29 17:57:02.057360 | orchestrator | Friday 29 August 2025 17:56:56 +0000 (0:00:00.758) 0:07:05.118 *********
2025-08-29 17:57:02.057372 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.057379 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.057386 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.057393 | orchestrator |
2025-08-29 17:57:02.057401 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-08-29 17:57:02.057405 | orchestrator | Friday 29 August 2025 17:56:56 +0000 (0:00:00.383) 0:07:05.501 *********
2025-08-29 17:57:02.057409 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:57:02.057413 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:57:02.057417 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:57:02.057421 | orchestrator |
2025-08-29 17:57:02.057426 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-08-29 17:57:02.057430 | orchestrator | Friday 29 August 2025 17:56:57 +0000 (0:00:00.395) 0:07:05.897 *********
2025-08-29 17:57:02.057434 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.057438 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.057442 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.057446 | orchestrator |
2025-08-29 17:57:02.057450 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-08-29 17:57:02.057454 | orchestrator | Friday 29 August 2025 17:56:58 +0000 (0:00:01.404) 0:07:07.301 *********
2025-08-29 17:57:02.057458 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:57:02.057462 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:57:02.057466 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:57:02.057470 | orchestrator |
2025-08-29 17:57:02.057474 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:57:02.057479 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 17:57:02.057483 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 17:57:02.057488 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 17:57:02.057492 | orchestrator |
2025-08-29 17:57:02.057496 | orchestrator |
2025-08-29 17:57:02.057500 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:57:02.057504 | orchestrator | Friday 29 August 2025 17:57:00 +0000 (0:00:01.291) 0:07:08.593 *********
2025-08-29 17:57:02.057508 | orchestrator | ===============================================================================
2025-08-29 17:57:02.057512 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.95s
2025-08-29 17:57:02.057516 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.46s
2025-08-29 17:57:02.057520 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 8.00s
2025-08-29 17:57:02.057524 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.92s
2025-08-29 17:57:02.057528 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.24s
2025-08-29 17:57:02.057532 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.61s
2025-08-29 17:57:02.057536 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.60s
2025-08-29 17:57:02.057540 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.23s
2025-08-29 17:57:02.057544 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.15s
2025-08-29 17:57:02.057549 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.12s
2025-08-29 17:57:02.057553 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.07s
2025-08-29 17:57:02.057557 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.92s
2025-08-29 17:57:02.057561 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.81s
2025-08-29 17:57:02.057565 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.70s
2025-08-29 17:57:02.057572 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.70s
2025-08-29 17:57:02.057576 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.61s
2025-08-29 17:57:02.057580 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.58s
2025-08-29 17:57:02.057584 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.38s
2025-08-29 17:57:02.057588 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.35s
2025-08-29 17:57:02.057592 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 4.17s
2025-08-29 17:57:02.057599 | orchestrator | 2025-08-29 17:57:02 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED
2025-08-29 17:57:02.057607 | orchestrator | 2025-08-29 17:57:02 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:57:05.069967 | orchestrator | 2025-08-29 17:57:05 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 17:57:05.070679 | orchestrator | 2025-08-29 17:57:05 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:57:05.071764 | orchestrator | 2025-08-29 17:57:05 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED
2025-08-29 17:57:05.071802 | orchestrator | 2025-08-29 17:57:05 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:57:08.109412 | orchestrator | 2025-08-29 17:57:08
INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED
2025-08-29 17:58:51.937064 | orchestrator | 2025-08-29 17:58:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:58:54.977256 | orchestrator | 2025-08-29 17:58:54 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 17:58:54.979094 | orchestrator | 2025-08-29 17:58:54 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state STARTED
2025-08-29 17:58:54.981344 | orchestrator | 2025-08-29 17:58:54 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED
2025-08-29 17:58:54.981381 | orchestrator | 2025-08-29 17:58:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 17:58:58.038119 | orchestrator | 2025-08-29 17:58:58 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 17:58:58.039500 | orchestrator | 2025-08-29 17:58:58 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 17:58:58.045793 | orchestrator | 2025-08-29 17:58:58 | INFO  | Task ada08347-363b-4d69-a743-61abb5b0457b is in state SUCCESS
2025-08-29 17:58:58.047889 | orchestrator |
2025-08-29 17:58:58.047939 | orchestrator |
2025-08-29 17:58:58.047955 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-08-29 17:58:58.048112 | orchestrator |
2025-08-29 17:58:58.048124 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-08-29 17:58:58.048136 | orchestrator | Friday 29 August 2025 17:46:30 +0000 (0:00:01.101) 0:00:01.101 *********
2025-08-29 17:58:58.048148 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.048161 | orchestrator |
2025-08-29 17:58:58.048172 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 17:58:58.048184 | orchestrator | Friday 29 August 2025 17:46:31 +0000 (0:00:01.347) 0:00:02.448 *********
2025-08-29 17:58:58.048196 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.048208 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.048218 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.048633 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.048797 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.048810 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.048822 | orchestrator |
2025-08-29 17:58:58.048834 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 17:58:58.048846 | orchestrator | Friday 29 August 2025 17:46:34 +0000 (0:00:02.211) 0:00:04.660 *********
2025-08-29 17:58:58.048903 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.048917 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.048930 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.048943 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.048955 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.048967 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.048981 | orchestrator |
2025-08-29 17:58:58.048994 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 17:58:58.049607 | orchestrator | Friday 29 August 2025 17:46:35 +0000 (0:00:01.161) 0:00:05.821 *********
2025-08-29 17:58:58.049631 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.049703 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.049719 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.049730 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.049741 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.049752 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.049762 | orchestrator |
2025-08-29 17:58:58.049774 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 17:58:58.049785 | orchestrator | Friday 29 August 2025 17:46:36 +0000 (0:00:01.628) 0:00:07.450 *********
2025-08-29 17:58:58.049796 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.049807 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.049818 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.049830 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.049841 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.049851 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.049862 | orchestrator |
2025-08-29 17:58:58.049873 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 17:58:58.049883 | orchestrator | Friday 29 August 2025 17:46:38 +0000 (0:00:01.157) 0:00:08.608 *********
2025-08-29 17:58:58.049892 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.049989 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.050356 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.050391 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.050402 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.050413 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.050424 | orchestrator |
2025-08-29 17:58:58.050435 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 17:58:58.050446 | orchestrator | Friday 29 August 2025 17:46:38 +0000 (0:00:00.835) 0:00:09.444 *********
2025-08-29 17:58:58.050456 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.050467 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.050479 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.050489 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.050500 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.050511 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.050523 | orchestrator |
2025-08-29 17:58:58.050535 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 17:58:58.050546 | orchestrator | Friday 29 August 2025 17:46:40 +0000 (0:00:01.338) 0:00:10.782 *********
2025-08-29 17:58:58.050558 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.050570 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.050581 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.050592 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.050603 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.050614 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.050827 | orchestrator |
2025-08-29 17:58:58.050842 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 17:58:58.050853 | orchestrator | Friday 29 August 2025 17:46:41 +0000 (0:00:01.187) 0:00:11.970 *********
2025-08-29 17:58:58.050864 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.050874 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.050885 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.051185 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.051197 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.051208 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.051380 | orchestrator |
2025-08-29 17:58:58.051396 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 17:58:58.051408 | orchestrator | Friday 29 August 2025 17:46:42 +0000 (0:00:01.437) 0:00:13.407 *********
2025-08-29 17:58:58.051420 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.051431 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:58:58.051487 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:58:58.051517 | orchestrator |
2025-08-29 17:58:58.051529 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 17:58:58.051542 | orchestrator | Friday 29 August 2025 17:46:43 +0000 (0:00:00.881) 0:00:14.288 *********
2025-08-29 17:58:58.051555 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.051567 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.051579 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.051590 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.051602 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.052133 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.052153 | orchestrator |
2025-08-29 17:58:58.052258 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 17:58:58.052293 | orchestrator | Friday 29 August 2025 17:46:46 +0000 (0:00:02.599) 0:00:16.888 *********
2025-08-29 17:58:58.052305 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.052317 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:58:58.052328 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:58:58.052339 | orchestrator |
2025-08-29 17:58:58.052349 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 17:58:58.052360 | orchestrator | Friday 29 August 2025 17:46:51 +0000 (0:00:04.605) 0:00:21.493 *********
2025-08-29 17:58:58.052371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.052383 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:58:58.052393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:58:58.052404 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.052415 | orchestrator |
2025-08-29 17:58:58.052425 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 17:58:58.052437 | orchestrator | Friday 29 August 2025 17:46:52 +0000 (0:00:01.049) 0:00:22.543 *********
2025-08-29 17:58:58.052450 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052464 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052476 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052488 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.052498 | orchestrator |
2025-08-29 17:58:58.052509 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 17:58:58.052520 | orchestrator | Friday 29 August 2025 17:46:53 +0000 (0:00:01.591) 0:00:24.134 *********
2025-08-29 17:58:58.052533 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052547 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052582 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.052592 | orchestrator |
2025-08-29 17:58:58.052612 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 17:58:58.052622 | orchestrator | Friday 29 August 2025 17:46:54 +0000 (0:00:00.630) 0:00:24.764 *********
2025-08-29 17:58:58.052634 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 17:46:47.645519', 'end': '2025-08-29 17:46:47.942622', 'delta': '0:00:00.297103', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.052729 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd':
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 17:46:49.422786', 'end': '2025-08-29 17:46:49.729128', 'delta': '0:00:00.306342', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.052744 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 17:46:50.457821', 'end': '2025-08-29 17:46:50.742673', 'delta': '0:00:00.284852', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.052756 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.052766 | orchestrator | 2025-08-29 17:58:58.052777 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 17:58:58.052787 | orchestrator | Friday 29 August 2025 17:46:54 +0000 (0:00:00.264) 0:00:25.028 ********* 2025-08-29 17:58:58.052797 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.052807 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.052817 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.052826 | orchestrator | 
ok: [testbed-node-3] 2025-08-29 17:58:58.052836 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.052846 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.052857 | orchestrator | 2025-08-29 17:58:58.052867 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 17:58:58.052877 | orchestrator | Friday 29 August 2025 17:46:58 +0000 (0:00:03.510) 0:00:28.539 ********* 2025-08-29 17:58:58.052887 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.052905 | orchestrator | 2025-08-29 17:58:58.052915 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 17:58:58.052926 | orchestrator | Friday 29 August 2025 17:46:58 +0000 (0:00:00.651) 0:00:29.190 ********* 2025-08-29 17:58:58.052936 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.052946 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.052956 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.052966 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.052976 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.052987 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.052997 | orchestrator | 2025-08-29 17:58:58.053007 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 17:58:58.053017 | orchestrator | Friday 29 August 2025 17:47:00 +0000 (0:00:01.835) 0:00:31.026 ********* 2025-08-29 17:58:58.053027 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053063 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053073 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053083 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053093 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053103 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053113 | orchestrator | 2025-08-29 17:58:58.053123 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 17:58:58.053134 | orchestrator | Friday 29 August 2025 17:47:03 +0000 (0:00:03.134) 0:00:34.161 ********* 2025-08-29 17:58:58.053143 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053153 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053163 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053174 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053184 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053194 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053204 | orchestrator | 2025-08-29 17:58:58.053221 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 17:58:58.053231 | orchestrator | Friday 29 August 2025 17:47:07 +0000 (0:00:03.424) 0:00:37.586 ********* 2025-08-29 17:58:58.053241 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053251 | orchestrator | 2025-08-29 17:58:58.053261 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 17:58:58.053293 | orchestrator | Friday 29 August 2025 17:47:07 +0000 (0:00:00.499) 0:00:38.085 ********* 2025-08-29 17:58:58.053303 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053314 | orchestrator | 2025-08-29 17:58:58.053324 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 17:58:58.053334 | orchestrator | Friday 29 August 2025 17:47:08 +0000 (0:00:00.574) 0:00:38.660 ********* 2025-08-29 17:58:58.053344 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053354 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053365 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053376 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053387 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:58:58.053398 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053408 | orchestrator | 2025-08-29 17:58:58.053419 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 17:58:58.053500 | orchestrator | Friday 29 August 2025 17:47:09 +0000 (0:00:01.192) 0:00:39.852 ********* 2025-08-29 17:58:58.053513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053524 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053534 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053545 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053556 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053567 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053577 | orchestrator | 2025-08-29 17:58:58.053588 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 17:58:58.053598 | orchestrator | Friday 29 August 2025 17:47:11 +0000 (0:00:01.825) 0:00:41.678 ********* 2025-08-29 17:58:58.053618 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053628 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053638 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053649 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053659 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053669 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053679 | orchestrator | 2025-08-29 17:58:58.053690 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 17:58:58.053700 | orchestrator | Friday 29 August 2025 17:47:12 +0000 (0:00:01.324) 0:00:43.002 ********* 2025-08-29 17:58:58.053711 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053721 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053731 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.053741 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053751 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053760 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053770 | orchestrator | 2025-08-29 17:58:58.053780 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 17:58:58.053789 | orchestrator | Friday 29 August 2025 17:47:13 +0000 (0:00:01.360) 0:00:44.362 ********* 2025-08-29 17:58:58.053799 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053808 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053818 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053827 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053837 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053846 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053856 | orchestrator | 2025-08-29 17:58:58.053865 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 17:58:58.053876 | orchestrator | Friday 29 August 2025 17:47:15 +0000 (0:00:01.483) 0:00:45.845 ********* 2025-08-29 17:58:58.053885 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.053894 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053904 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053913 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.053922 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.053931 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.053940 | orchestrator | 2025-08-29 17:58:58.053950 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 17:58:58.053960 | orchestrator | Friday 29 August 2025 17:47:17 +0000 (0:00:01.803) 0:00:47.650 ********* 2025-08-29 17:58:58.053969 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 17:58:58.053979 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.053988 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.053998 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.054007 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.054056 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.054067 | orchestrator | 2025-08-29 17:58:58.054077 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 17:58:58.054086 | orchestrator | Friday 29 August 2025 17:47:18 +0000 (0:00:01.589) 0:00:49.239 ********* 2025-08-29 17:58:58.054097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 17:58:58.054146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2', 'scsi-SQEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee02ac66-7081-4f67-9e89-908cf88442b2-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.054437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-01-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.054450 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.054460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6', 'scsi-SQEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part1', 'scsi-SQEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part14', 'scsi-SQEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part15', 'scsi-SQEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part16', 'scsi-SQEMU_QEMU_HARDDISK_82ede1f5-1152-49fb-8657-6e3d9aa6c6b6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.054639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.054674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.054685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2', 'scsi-SQEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part1', 'scsi-SQEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part14', 'scsi-SQEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part15', 'scsi-SQEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part16', 'scsi-SQEMU_QEMU_HARDDISK_53d0bb56-43d8-4988-b520-f0487c65e4d2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.054955 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.054970 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.054981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd', 'dm-uuid-LVM-OqDG69t2vDaMZOSVNYzQsHcamcItuLTl1BlHeYkcX7dm3chbRI1wtvAKHp0WLUD2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.054995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129', 'dm-uuid-LVM-ULB2gWLlz2AdGy8HiFWlMZDHhaZvCU06Fl3MfVfjSLpPZ9EuBrU7lFdIGZEopowg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055006 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2', 'dm-uuid-LVM-u9knlHc70OesxONFTpvJpQrQMt493OzUXELOIcRt1U0MLaOUw5bOgGqcmYktu9JG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca', 'dm-uuid-LVM-GeQfWNL5PTOhGNNfRWS0IbIydprklRI12ZL8udWoflwZgPkVZQjQdRuNlD9nJ5hY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-08-29 17:58:58.055164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12', 'dm-uuid-LVM-fcIw1H3lu8i6pMvymK1dZFlPy4lkQ9ZNvGa0GU49Ovc7OUkZWQpmdJqvrMMIZdlM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jq6FGk-VWre-Zblz-qi7M-NFUu-gpED-HalvBt', 'scsi-0QEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae', 'scsi-SQEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4', 'dm-uuid-LVM-ZPM3oy9rFQdt2qS5meKQf8Sb5LM8gmm9dE2KuxJJLUNJZ3q9zc5er2Wc2d9c9yBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055605 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-euxw4P-W6xv-792Y-3K4Q-DM05-27QV-XtcBBi', 'scsi-0QEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f', 'scsi-SQEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lDIUPJ-Cz7F-HF3P-wwdB-p9MW-Kng2-2lXYh8', 'scsi-0QEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87', 'scsi-SQEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e', 'scsi-SQEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055689 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.055705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xy292o-a1aF-88n0-5PuI-v5n4-SSXU-DAhGVS', 'scsi-0QEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff', 'scsi-SQEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217', 'scsi-SQEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.055802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055813 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.055823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 17:58:58.055973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.056000 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iS2ciG-R3is-3hmZ-FLZL-azvV-f5Rp-K1AJgY', 'scsi-0QEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c', 'scsi-SQEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.056010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oUaaae-JHpJ-FipB-6T3E-vLiy-UwcG-Zeom8E', 'scsi-0QEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527', 'scsi-SQEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.056024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0', 'scsi-SQEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.056034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 17:58:58.056101 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.056115 | orchestrator | 2025-08-29 17:58:58.056124 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 17:58:58.056133 | orchestrator | Friday 29 August 2025 17:47:22 +0000 (0:00:03.489) 0:00:52.729 ********* 2025-08-29 17:58:58.056142 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 17:58:58.056161 | orchestrator | skipping: [testbed-node-1] => (items loop0–loop7, sda, sr0; condition 'inventory_hostname in groups.get(osd_group_name, [])' was False)
2025-08-29 17:58:58.056170 | orchestrator | skipping: [testbed-node-0] => (items loop1–loop7, sda, sr0; condition 'inventory_hostname in groups.get(osd_group_name, [])' was False)
2025-08-29 17:58:58.056203 | orchestrator | skipping: [testbed-node-2] => (items loop0–loop7, sda, sr0; condition 'inventory_hostname in groups.get(osd_group_name, [])' was False)
2025-08-29 17:58:58.056618 | orchestrator | skipping: [testbed-node-2]
(skipped items: loop devices 0.00 Bytes; sda QEMU HARDDISK 80.00 GB with partitions sda1/sda14/sda15/sda16; sr0 QEMU DVD-ROM 506.00 KB; ceph LVM volumes dm-0/dm-1 20.00 GB)
2025-08-29 17:58:58.056983 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0–loop6; condition 'osd_auto_discovery | default(False) | bool' was False)
2025-08-29 17:58:58.057006 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.057025 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.057045 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0–loop2; condition 'osd_auto_discovery | default(False) | bool' was False)
2025-08-29 17:58:58.057179 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0–loop4; condition 'osd_auto_discovery | default(False) | bool' was False)
2025-08-29 17:58:58.057444 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable':
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 17:58:58.057571 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jq6FGk-VWre-Zblz-qi7M-NFUu-gpED-HalvBt', 'scsi-0QEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae', 'scsi-SQEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057642 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xy292o-a1aF-88n0-5PuI-v5n4-SSXU-DAhGVS', 'scsi-0QEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff', 'scsi-SQEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217', 'scsi-SQEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057719 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-euxw4P-W6xv-792Y-3K4Q-DM05-27QV-XtcBBi', 'scsi-0QEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f', 'scsi-SQEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-01-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057734 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.057827 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': 
'10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057845 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lDIUPJ-Cz7F-HF3P-wwdB-p9MW-Kng2-2lXYh8', 'scsi-0QEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87', 'scsi-SQEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057858 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iS2ciG-R3is-3hmZ-FLZL-azvV-f5Rp-K1AJgY', 'scsi-0QEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c', 'scsi-SQEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oUaaae-JHpJ-FipB-6T3E-vLiy-UwcG-Zeom8E', 'scsi-0QEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527', 'scsi-SQEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057926 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0', 'scsi-SQEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e', 'scsi-SQEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 17:58:58.057951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
2025-08-29 17:58:58.057956 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 17:58:58.057961 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:58:58.057967 | orchestrator | 
2025-08-29 17:58:58.057972 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 
2025-08-29 17:58:58.057977 | orchestrator | Friday 29 August 2025 17:47:23 +0000 (0:00:01.491) 0:00:54.220 ********* 
2025-08-29 17:58:58.057982 | orchestrator | ok: [testbed-node-0] 
2025-08-29 17:58:58.057987 | orchestrator | ok: [testbed-node-1] 
2025-08-29 17:58:58.057992 | orchestrator | ok: [testbed-node-2] 
2025-08-29 17:58:58.058077 | orchestrator | ok: [testbed-node-3] 
2025-08-29 17:58:58.058086 | orchestrator | ok: [testbed-node-4] 
2025-08-29 17:58:58.058092 | orchestrator | ok: [testbed-node-5] 
2025-08-29 17:58:58.058096 | orchestrator | 
2025-08-29 17:58:58.058102 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 
2025-08-29 17:58:58.058107 | orchestrator | Friday 29 August 2025 17:47:26 +0000 (0:00:02.834) 0:00:57.055 ********* 
2025-08-29 17:58:58.058111 | orchestrator | ok: [testbed-node-0] 
2025-08-29 17:58:58.058116 | orchestrator | ok: [testbed-node-1] 
2025-08-29 17:58:58.058121 | orchestrator | ok: [testbed-node-2] 
2025-08-29 17:58:58.058126 | orchestrator | ok: [testbed-node-4] 
2025-08-29 17:58:58.058131 | orchestrator | ok: [testbed-node-3] 
2025-08-29 17:58:58.058135 | orchestrator | ok: [testbed-node-5] 
2025-08-29 17:58:58.058140 | orchestrator | 
2025-08-29 17:58:58.058145 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 
2025-08-29 17:58:58.058150 | orchestrator | Friday 29 August 2025 17:47:27 +0000 (0:00:01.032) 0:00:58.088 ********* 
2025-08-29 17:58:58.058155 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 17:58:58.058160 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 17:58:58.058164 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.058169 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 17:58:58.058174 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:58:58.058179 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 17:58:58.058184 | orchestrator | 
2025-08-29 17:58:58.058188 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 
2025-08-29 17:58:58.058193 | orchestrator | Friday 29 August 2025 17:47:28 +0000 (0:00:00.942) 0:00:59.030 ********* 
2025-08-29 17:58:58.058198 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 17:58:58.058203 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 17:58:58.058208 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.058212 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 17:58:58.058217 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:58:58.058222 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 17:58:58.058227 | orchestrator | 
2025-08-29 17:58:58.058232 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 
2025-08-29 17:58:58.058240 | orchestrator | Friday 29 August 2025 17:47:29 +0000 (0:00:01.191) 0:01:00.222 ********* 
2025-08-29 17:58:58.058254 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 17:58:58.058262 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 17:58:58.058284 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.058291 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 17:58:58.058299 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:58:58.058306 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 17:58:58.058325 | orchestrator | 
2025-08-29 17:58:58.058333 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 
2025-08-29 17:58:58.058341 | orchestrator | Friday 29 August 2025 17:47:31 +0000 (0:00:01.569) 0:01:01.792 ********* 
2025-08-29 17:58:58.058349 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 17:58:58.058356 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 17:58:58.058364 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.058373 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 17:58:58.058378 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 17:58:58.058382 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 17:58:58.058387 | orchestrator | 
2025-08-29 17:58:58.058392 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 
2025-08-29 17:58:58.058397 | orchestrator | Friday 29 August 2025 17:47:32 +0000 (0:00:01.068) 0:01:02.860 ********* 
2025-08-29 17:58:58.058402 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2025-08-29 17:58:58.058407 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 
2025-08-29 17:58:58.058412 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 
2025-08-29 17:58:58.058416 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 
2025-08-29 17:58:58.058421 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 
2025-08-29 17:58:58.058426 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 
2025-08-29 17:58:58.058431 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 
2025-08-29 17:58:58.058435 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 
2025-08-29 17:58:58.058440 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 
2025-08-29 17:58:58.058445 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 
2025-08-29 17:58:58.058450 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2025-08-29 17:58:58.058454 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 
2025-08-29 17:58:58.058459 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 
2025-08-29 17:58:58.058464 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 
2025-08-29 17:58:58.058469 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 
2025-08-29 17:58:58.058473 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 
2025-08-29 17:58:58.058478 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 
2025-08-29 17:58:58.058483 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 
2025-08-29 17:58:58.058487 | orchestrator | 
2025-08-29 17:58:58.058496 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 
2025-08-29 17:58:58.058503 | orchestrator | Friday 29 August 2025 17:47:36 +0000 (0:00:04.252) 0:01:07.113 ********* 
2025-08-29 17:58:58.058511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  
2025-08-29 17:58:58.058519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  
2025-08-29 17:58:58.058526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  
2025-08-29 17:58:58.058534 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 17:58:58.058541 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  
2025-08-29 17:58:58.058548 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  
2025-08-29 17:58:58.058556 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  
2025-08-29 17:58:58.058563 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 17:58:58.058570 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  
2025-08-29 17:58:58.058578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  
2025-08-29 17:58:58.058586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  
2025-08-29 17:58:58.058602 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.058638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  
2025-08-29 17:58:58.058644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  
2025-08-29 17:58:58.058649 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2025-08-29 17:58:58.058654 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 17:58:58.058663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 17:58:58.058668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 17:58:58.058674 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.058679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 17:58:58.058685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 17:58:58.058690 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 17:58:58.058695 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.058701 | orchestrator | 2025-08-29 17:58:58.058706 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 17:58:58.058712 | orchestrator | Friday 29 August 2025 17:47:38 +0000 (0:00:01.443) 0:01:08.556 ********* 2025-08-29 17:58:58.058717 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.058722 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.058728 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.058734 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.058740 | orchestrator | 2025-08-29 17:58:58.058745 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 17:58:58.058752 | orchestrator | Friday 29 August 2025 17:47:40 +0000 (0:00:01.985) 0:01:10.542 ********* 2025-08-29 17:58:58.058757 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058763 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.058768 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.058774 | orchestrator | 2025-08-29 17:58:58.058779 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 17:58:58.058785 | orchestrator | Friday 29 August 2025 17:47:40 +0000 (0:00:00.654) 0:01:11.197 ********* 2025-08-29 17:58:58.058791 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058796 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.058802 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.058807 | orchestrator | 2025-08-29 17:58:58.058813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 17:58:58.058818 | orchestrator | Friday 29 August 2025 17:47:42 +0000 (0:00:01.346) 0:01:12.543 ********* 2025-08-29 17:58:58.058823 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.058832 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.058837 | orchestrator | 2025-08-29 17:58:58.058842 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 17:58:58.058846 | orchestrator | Friday 29 August 2025 17:47:43 +0000 (0:00:01.094) 0:01:13.638 ********* 2025-08-29 17:58:58.058851 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.058856 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.058861 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.058866 | orchestrator | 2025-08-29 17:58:58.058870 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 17:58:58.058875 | orchestrator | Friday 29 August 2025 17:47:44 +0000 (0:00:01.122) 0:01:14.760 ********* 2025-08-29 17:58:58.058880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:58:58.058885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
 2025-08-29 17:58:58.058889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:58:58.058898 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058903 | orchestrator | 2025-08-29 17:58:58.058908 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 17:58:58.058912 | orchestrator | Friday 29 August 2025 17:47:44 +0000 (0:00:00.474) 0:01:15.235 ********* 2025-08-29 17:58:58.058917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:58:58.058922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:58:58.058926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:58:58.058931 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058936 | orchestrator | 2025-08-29 17:58:58.058941 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 17:58:58.058945 | orchestrator | Friday 29 August 2025 17:47:45 +0000 (0:00:00.488) 0:01:15.723 ********* 2025-08-29 17:58:58.058953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 17:58:58.058958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 17:58:58.058963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 17:58:58.058968 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.058972 | orchestrator | 2025-08-29 17:58:58.058977 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 17:58:58.058982 | orchestrator | Friday 29 August 2025 17:47:45 +0000 (0:00:00.542) 0:01:16.266 ********* 2025-08-29 17:58:58.058987 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.058991 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.058996 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059001 | orchestrator | 
2025-08-29 17:58:58.059005 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 17:58:58.059010 | orchestrator | Friday 29 August 2025 17:47:46 +0000 (0:00:00.827) 0:01:17.093 ********* 2025-08-29 17:58:58.059015 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 17:58:58.059020 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 17:58:58.059024 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 17:58:58.059029 | orchestrator | 2025-08-29 17:58:58.059034 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 17:58:58.059039 | orchestrator | Friday 29 August 2025 17:47:48 +0000 (0:00:01.720) 0:01:18.813 ********* 2025-08-29 17:58:58.059059 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 17:58:58.059065 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 17:58:58.059070 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 17:58:58.059075 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-08-29 17:58:58.059079 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 17:58:58.059084 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 17:58:58.059089 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 17:58:58.059094 | orchestrator | 2025-08-29 17:58:58.059098 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 17:58:58.059103 | orchestrator | Friday 29 August 2025 17:47:49 +0000 (0:00:01.267) 0:01:20.081 ********* 2025-08-29 17:58:58.059108 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 17:58:58.059113 | 
orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 17:58:58.059117 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 17:58:58.059122 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-08-29 17:58:58.059127 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 17:58:58.059131 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 17:58:58.059140 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 17:58:58.059145 | orchestrator | 2025-08-29 17:58:58.059150 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 17:58:58.059154 | orchestrator | Friday 29 August 2025 17:47:52 +0000 (0:00:03.182) 0:01:23.263 ********* 2025-08-29 17:58:58.059160 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.059166 | orchestrator | 2025-08-29 17:58:58.059171 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 17:58:58.059176 | orchestrator | Friday 29 August 2025 17:47:54 +0000 (0:00:01.799) 0:01:25.063 ********* 2025-08-29 17:58:58.059180 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.059185 | orchestrator | 2025-08-29 17:58:58.059190 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 17:58:58.059195 | orchestrator | Friday 29 August 2025 17:47:56 +0000 (0:00:01.511) 0:01:26.574 ********* 
2025-08-29 17:58:58.059199 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.059204 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.059209 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.059213 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.059218 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.059223 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.059228 | orchestrator | 2025-08-29 17:58:58.059232 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 17:58:58.059237 | orchestrator | Friday 29 August 2025 17:47:57 +0000 (0:00:01.404) 0:01:27.979 ********* 2025-08-29 17:58:58.059242 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059247 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059251 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059256 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059261 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.059279 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059286 | orchestrator | 2025-08-29 17:58:58.059290 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 17:58:58.059295 | orchestrator | Friday 29 August 2025 17:47:58 +0000 (0:00:01.376) 0:01:29.355 ********* 2025-08-29 17:58:58.059300 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059304 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059309 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059314 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059318 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.059323 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059328 | orchestrator | 2025-08-29 17:58:58.059335 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 17:58:58.059340 | 
orchestrator | Friday 29 August 2025 17:48:00 +0000 (0:00:01.837) 0:01:31.193 ********* 2025-08-29 17:58:58.059345 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059350 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059355 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059359 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059364 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.059369 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059373 | orchestrator | 2025-08-29 17:58:58.059378 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 17:58:58.059383 | orchestrator | Friday 29 August 2025 17:48:02 +0000 (0:00:01.461) 0:01:32.654 ********* 2025-08-29 17:58:58.059388 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.059392 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.059397 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.059408 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.059413 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.059418 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.059423 | orchestrator | 2025-08-29 17:58:58.059427 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 17:58:58.059432 | orchestrator | Friday 29 August 2025 17:48:03 +0000 (0:00:01.481) 0:01:34.136 ********* 2025-08-29 17:58:58.059453 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059459 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059463 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059468 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.059473 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.059478 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.059482 | orchestrator | 2025-08-29 17:58:58.059487 | orchestrator | TASK 
[ceph-handler : Check for a nfs container] ******************************** 2025-08-29 17:58:58.059492 | orchestrator | Friday 29 August 2025 17:48:04 +0000 (0:00:01.005) 0:01:35.141 ********* 2025-08-29 17:58:58.059497 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059501 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059506 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059511 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.059515 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.059520 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.059525 | orchestrator | 2025-08-29 17:58:58.059530 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 17:58:58.059534 | orchestrator | Friday 29 August 2025 17:48:05 +0000 (0:00:01.078) 0:01:36.220 ********* 2025-08-29 17:58:58.059539 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.059544 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.059550 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.059559 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059568 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.059577 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059585 | orchestrator | 2025-08-29 17:58:58.059594 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 17:58:58.059602 | orchestrator | Friday 29 August 2025 17:48:07 +0000 (0:00:01.541) 0:01:37.762 ********* 2025-08-29 17:58:58.059611 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.059620 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.059628 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.059636 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059644 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.059653 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059661 | 
orchestrator | 2025-08-29 17:58:58.059669 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 17:58:58.059678 | orchestrator | Friday 29 August 2025 17:48:09 +0000 (0:00:02.207) 0:01:39.969 ********* 2025-08-29 17:58:58.059687 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059696 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059713 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.059722 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.059732 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.059741 | orchestrator | 2025-08-29 17:58:58.059751 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 17:58:58.059761 | orchestrator | Friday 29 August 2025 17:48:10 +0000 (0:00:00.644) 0:01:40.614 ********* 2025-08-29 17:58:58.059770 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.059780 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.059790 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.059800 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.059809 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.059819 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.059828 | orchestrator | 2025-08-29 17:58:58.059838 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 17:58:58.059854 | orchestrator | Friday 29 August 2025 17:48:11 +0000 (0:00:01.177) 0:01:41.792 ********* 2025-08-29 17:58:58.059864 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059873 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059891 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059901 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 17:58:58.059910 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.059919 | orchestrator | 2025-08-29 17:58:58.059928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 17:58:58.059937 | orchestrator | Friday 29 August 2025 17:48:12 +0000 (0:00:01.002) 0:01:42.794 ********* 2025-08-29 17:58:58.059947 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.059956 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.059965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.059974 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.059983 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.059992 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.060002 | orchestrator | 2025-08-29 17:58:58.060011 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 17:58:58.060020 | orchestrator | Friday 29 August 2025 17:48:13 +0000 (0:00:01.459) 0:01:44.254 ********* 2025-08-29 17:58:58.060030 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.060039 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.060049 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.060058 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.060067 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.060076 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.060085 | orchestrator | 2025-08-29 17:58:58.060103 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 17:58:58.060113 | orchestrator | Friday 29 August 2025 17:48:14 +0000 (0:00:00.722) 0:01:44.976 ********* 2025-08-29 17:58:58.060122 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.060131 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.060139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.060149 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.060159 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.060168 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.060177 | orchestrator | 2025-08-29 17:58:58.060187 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 17:58:58.060196 | orchestrator | Friday 29 August 2025 17:48:15 +0000 (0:00:00.942) 0:01:45.918 ********* 2025-08-29 17:58:58.060206 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.060216 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.060226 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.060236 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.060246 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.060255 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.060309 | orchestrator | 2025-08-29 17:58:58.060322 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 17:58:58.060367 | orchestrator | Friday 29 August 2025 17:48:16 +0000 (0:00:00.745) 0:01:46.664 ********* 2025-08-29 17:58:58.060378 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.060387 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.060396 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.060405 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.060413 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.060422 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.060431 | orchestrator | 2025-08-29 17:58:58.060440 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 17:58:58.060449 | orchestrator | Friday 29 August 2025 17:48:17 +0000 (0:00:00.937) 0:01:47.601 ********* 2025-08-29 17:58:58.060458 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.060466 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 17:58:58.060493 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.060502 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.060510 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.060519 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.060528 | orchestrator | 2025-08-29 17:58:58.060536 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 17:58:58.060545 | orchestrator | Friday 29 August 2025 17:48:18 +0000 (0:00:01.229) 0:01:48.831 ********* 2025-08-29 17:58:58.060554 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.060562 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.060571 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.060581 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.060590 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.060599 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.060607 | orchestrator | 2025-08-29 17:58:58.060616 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-08-29 17:58:58.060625 | orchestrator | Friday 29 August 2025 17:48:19 +0000 (0:00:01.429) 0:01:50.260 ********* 2025-08-29 17:58:58.060634 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.060643 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.060652 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.060660 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.060669 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.060679 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.060688 | orchestrator | 2025-08-29 17:58:58.060696 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-08-29 17:58:58.060705 | orchestrator | Friday 29 August 2025 17:48:21 +0000 (0:00:01.949) 0:01:52.209 ********* 2025-08-29 17:58:58.060713 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 17:58:58.060721 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.060730 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.060738 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.060746 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.060755 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.060763 | orchestrator | 2025-08-29 17:58:58.060772 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-08-29 17:58:58.060780 | orchestrator | Friday 29 August 2025 17:48:23 +0000 (0:00:01.980) 0:01:54.189 ********* 2025-08-29 17:58:58.060790 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.060800 | orchestrator | 2025-08-29 17:58:58.060809 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-08-29 17:58:58.060818 | orchestrator | Friday 29 August 2025 17:48:25 +0000 (0:00:01.301) 0:01:55.490 ********* 2025-08-29 17:58:58.060827 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.060836 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.060845 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.060854 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.060863 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.060872 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.060882 | orchestrator | 2025-08-29 17:58:58.060890 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-08-29 17:58:58.060899 | orchestrator | Friday 29 August 2025 17:48:25 +0000 (0:00:00.887) 0:01:56.378 ********* 2025-08-29 17:58:58.060908 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.060917 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 17:58:58.060926 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.060935 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.060944 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.060953 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.060961 | orchestrator | 2025-08-29 17:58:58.060969 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-08-29 17:58:58.060983 | orchestrator | Friday 29 August 2025 17:48:26 +0000 (0:00:00.637) 0:01:57.015 ********* 2025-08-29 17:58:58.060991 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 17:58:58.060999 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 17:58:58.061017 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 17:58:58.061026 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 17:58:58.061034 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 17:58:58.061043 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 17:58:58.061053 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 17:58:58.061062 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 17:58:58.061070 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-08-29 17:58:58.061079 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 17:58:58.061087 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 17:58:58.061096 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-08-29 17:58:58.061105 | orchestrator | 2025-08-29 17:58:58.061147 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-08-29 17:58:58.061157 | orchestrator | Friday 29 August 2025 17:48:28 +0000 (0:00:01.659) 0:01:58.675 ********* 2025-08-29 17:58:58.061166 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.061175 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.061183 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.061195 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.061206 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.061215 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.061224 | orchestrator | 2025-08-29 17:58:58.061233 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-08-29 17:58:58.061242 | orchestrator | Friday 29 August 2025 17:48:29 +0000 (0:00:01.074) 0:01:59.750 ********* 2025-08-29 17:58:58.061250 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.061259 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.061295 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.061304 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.061313 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.061322 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.061331 | orchestrator | 2025-08-29 17:58:58.061339 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-08-29 17:58:58.061348 | orchestrator | Friday 29 August 2025 17:48:30 +0000 (0:00:00.969) 0:02:00.720 ********* 2025-08-29 17:58:58.061356 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.061364 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.061373 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
17:58:58.061381 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.061390 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.061398 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.061406 | orchestrator | 2025-08-29 17:58:58.061415 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-08-29 17:58:58.061423 | orchestrator | Friday 29 August 2025 17:48:31 +0000 (0:00:00.779) 0:02:01.499 ********* 2025-08-29 17:58:58.061431 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.061440 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.061449 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.061457 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.061465 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.061474 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.061489 | orchestrator | 2025-08-29 17:58:58.061497 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-08-29 17:58:58.061505 | orchestrator | Friday 29 August 2025 17:48:31 +0000 (0:00:00.878) 0:02:02.377 ********* 2025-08-29 17:58:58.061514 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.061523 | orchestrator | 2025-08-29 17:58:58.061531 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-08-29 17:58:58.061540 | orchestrator | Friday 29 August 2025 17:48:33 +0000 (0:00:01.367) 0:02:03.745 ********* 2025-08-29 17:58:58.061548 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.061557 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.061565 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.061573 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.061581 | orchestrator 
| ok: [testbed-node-5] 2025-08-29 17:58:58.061590 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.061597 | orchestrator | 2025-08-29 17:58:58.061606 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-08-29 17:58:58.061614 | orchestrator | Friday 29 August 2025 17:49:52 +0000 (0:01:19.351) 0:03:23.096 ********* 2025-08-29 17:58:58.061623 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 17:58:58.061631 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 17:58:58.061639 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 17:58:58.061648 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.061656 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 17:58:58.061664 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 17:58:58.061673 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 17:58:58.061681 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.061689 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 17:58:58.061697 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 17:58:58.061710 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 17:58:58.061719 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.061728 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 17:58:58.061736 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 17:58:58.061745 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 
17:58:58.061753 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.061762 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 17:58:58.061770 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 17:58:58.061779 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 17:58:58.061787 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.061796 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-08-29 17:58:58.061804 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-08-29 17:58:58.061813 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-08-29 17:58:58.061851 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.061860 | orchestrator | 2025-08-29 17:58:58.061869 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-08-29 17:58:58.061877 | orchestrator | Friday 29 August 2025 17:49:53 +0000 (0:00:01.243) 0:03:24.340 ********* 2025-08-29 17:58:58.061886 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.061901 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.061909 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.061918 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.061926 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.061935 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.061943 | orchestrator | 2025-08-29 17:58:58.061952 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-08-29 17:58:58.061960 | orchestrator | Friday 29 August 2025 17:49:54 +0000 (0:00:01.023) 0:03:25.364 ********* 2025-08-29 17:58:58.061969 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.061978 | 
orchestrator | 2025-08-29 17:58:58.061987 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-08-29 17:58:58.061996 | orchestrator | Friday 29 August 2025 17:49:55 +0000 (0:00:00.289) 0:03:25.653 ********* 2025-08-29 17:58:58.062005 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062038 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062050 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062059 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062068 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062077 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062086 | orchestrator | 2025-08-29 17:58:58.062095 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-08-29 17:58:58.062104 | orchestrator | Friday 29 August 2025 17:49:56 +0000 (0:00:01.091) 0:03:26.745 ********* 2025-08-29 17:58:58.062113 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062122 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062131 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062140 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062149 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062158 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062167 | orchestrator | 2025-08-29 17:58:58.062176 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-08-29 17:58:58.062185 | orchestrator | Friday 29 August 2025 17:49:57 +0000 (0:00:00.870) 0:03:27.616 ********* 2025-08-29 17:58:58.062193 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062202 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062211 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062220 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062229 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062238 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062247 | orchestrator | 2025-08-29 17:58:58.062256 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-08-29 17:58:58.062282 | orchestrator | Friday 29 August 2025 17:49:58 +0000 (0:00:01.245) 0:03:28.861 ********* 2025-08-29 17:58:58.062292 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.062301 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.062310 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.062319 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.062328 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.062337 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.062346 | orchestrator | 2025-08-29 17:58:58.062354 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-08-29 17:58:58.062363 | orchestrator | Friday 29 August 2025 17:50:01 +0000 (0:00:02.999) 0:03:31.861 ********* 2025-08-29 17:58:58.062372 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.062381 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.062389 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.062399 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.062407 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.062415 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.062424 | orchestrator | 2025-08-29 17:58:58.062432 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-08-29 17:58:58.062440 | orchestrator | Friday 29 August 2025 17:50:02 +0000 (0:00:00.880) 0:03:32.741 ********* 2025-08-29 17:58:58.062455 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.062465 | 
orchestrator | 2025-08-29 17:58:58.062473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-08-29 17:58:58.062481 | orchestrator | Friday 29 August 2025 17:50:03 +0000 (0:00:01.239) 0:03:33.981 ********* 2025-08-29 17:58:58.062489 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062498 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062506 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062514 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062522 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062534 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062543 | orchestrator | 2025-08-29 17:58:58.062551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-08-29 17:58:58.062559 | orchestrator | Friday 29 August 2025 17:50:04 +0000 (0:00:00.760) 0:03:34.741 ********* 2025-08-29 17:58:58.062569 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062577 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062587 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062596 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062614 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062622 | orchestrator | 2025-08-29 17:58:58.062631 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-08-29 17:58:58.062640 | orchestrator | Friday 29 August 2025 17:50:05 +0000 (0:00:01.446) 0:03:36.188 ********* 2025-08-29 17:58:58.062649 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062667 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062676 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062684 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062693 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062702 | orchestrator | 2025-08-29 17:58:58.062711 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-08-29 17:58:58.062751 | orchestrator | Friday 29 August 2025 17:50:06 +0000 (0:00:01.077) 0:03:37.265 ********* 2025-08-29 17:58:58.062761 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062770 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062780 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062789 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062798 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062807 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062816 | orchestrator | 2025-08-29 17:58:58.062824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-08-29 17:58:58.062832 | orchestrator | Friday 29 August 2025 17:50:07 +0000 (0:00:01.162) 0:03:38.428 ********* 2025-08-29 17:58:58.062840 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062848 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062857 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062866 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062874 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062881 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062890 | orchestrator | 2025-08-29 17:58:58.062898 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-08-29 17:58:58.062906 | orchestrator | Friday 29 August 2025 17:50:08 +0000 (0:00:00.915) 0:03:39.343 ********* 2025-08-29 17:58:58.062914 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.062922 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.062929 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.062938 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.062946 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.062954 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.062970 | orchestrator | 2025-08-29 17:58:58.062979 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-08-29 17:58:58.062987 | orchestrator | Friday 29 August 2025 17:50:09 +0000 (0:00:01.124) 0:03:40.468 ********* 2025-08-29 17:58:58.062996 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.063004 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.063012 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.063020 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.063029 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.063036 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.063043 | orchestrator | 2025-08-29 17:58:58.063050 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-08-29 17:58:58.063057 | orchestrator | Friday 29 August 2025 17:50:10 +0000 (0:00:00.940) 0:03:41.408 ********* 2025-08-29 17:58:58.063064 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.063071 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.063078 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.063085 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.063092 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.063099 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.063107 | orchestrator | 2025-08-29 17:58:58.063114 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-08-29 17:58:58.063121 | orchestrator | Friday 29 August 2025 17:50:12 +0000 (0:00:01.204) 0:03:42.613 ********* 2025-08-29 17:58:58.063129 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.063137 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.063146 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.063154 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.063162 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.063169 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.063177 | orchestrator | 2025-08-29 17:58:58.063185 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-08-29 17:58:58.063193 | orchestrator | Friday 29 August 2025 17:50:13 +0000 (0:00:01.654) 0:03:44.267 ********* 2025-08-29 17:58:58.063202 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.063210 | orchestrator | 2025-08-29 17:58:58.063218 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-08-29 17:58:58.063226 | orchestrator | Friday 29 August 2025 17:50:15 +0000 (0:00:01.740) 0:03:46.007 ********* 2025-08-29 17:58:58.063234 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-08-29 17:58:58.063241 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-08-29 17:58:58.063250 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-08-29 17:58:58.063258 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-08-29 17:58:58.063359 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-08-29 17:58:58.063381 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-08-29 17:58:58.063386 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-08-29 17:58:58.063391 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-08-29 17:58:58.063401 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-08-29 17:58:58.063406 
| orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-08-29 17:58:58.063411 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-08-29 17:58:58.063423 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-08-29 17:58:58.063428 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-08-29 17:58:58.063433 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-08-29 17:58:58.063437 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-08-29 17:58:58.063442 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-08-29 17:58:58.063447 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-08-29 17:58:58.063458 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-08-29 17:58:58.063462 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-08-29 17:58:58.063467 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-08-29 17:58:58.063472 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-08-29 17:58:58.063516 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-08-29 17:58:58.063522 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-08-29 17:58:58.063527 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-08-29 17:58:58.063531 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-08-29 17:58:58.063536 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-08-29 17:58:58.063541 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-08-29 17:58:58.063545 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-08-29 17:58:58.063550 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-08-29 17:58:58.063555 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/mds) 2025-08-29 17:58:58.063560 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-08-29 17:58:58.063565 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-08-29 17:58:58.063570 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-08-29 17:58:58.063574 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-08-29 17:58:58.063579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:58:58.063584 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-08-29 17:58:58.063589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-08-29 17:58:58.063593 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-08-29 17:58:58.063598 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-08-29 17:58:58.063603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-08-29 17:58:58.063607 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-08-29 17:58:58.063612 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-08-29 17:58:58.063617 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:58:58.063622 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:58:58.063626 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-08-29 17:58:58.063631 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:58:58.063636 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:58:58.063640 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:58:58.063645 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:58:58.063650 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-08-29 17:58:58.063654 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:58:58.063659 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:58:58.063664 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:58:58.063668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:58:58.063673 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:58:58.063677 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-08-29 17:58:58.063682 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:58:58.063687 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:58:58.063691 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:58:58.063701 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:58:58.063705 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:58:58.063710 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-08-29 17:58:58.063714 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:58:58.063719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:58:58.063724 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:58:58.063729 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:58:58.063733 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:58:58.063741 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 
2025-08-29 17:58:58.063746 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-08-29 17:58:58.063751 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:58:58.063755 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:58:58.063760 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:58:58.063765 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:58:58.063769 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:58:58.063774 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-08-29 17:58:58.063779 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:58:58.063783 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-08-29 17:58:58.063788 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:58:58.063793 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:58:58.063813 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:58:58.063818 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-08-29 17:58:58.063823 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:58:58.063828 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-08-29 17:58:58.063833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:58:58.063837 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:58:58.063842 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-08-29 17:58:58.063847 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-08-29 17:58:58.063851 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-08-29 17:58:58.063856 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-08-29 17:58:58.063861 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-08-29 17:58:58.063866 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-08-29 17:58:58.063870 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-08-29 17:58:58.063875 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-08-29 17:58:58.063880 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-08-29 17:58:58.063884 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-08-29 17:58:58.063889 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-08-29 17:58:58.063894 | orchestrator | 2025-08-29 17:58:58.063899 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-08-29 17:58:58.063903 | orchestrator | Friday 29 August 2025 17:50:22 +0000 (0:00:07.064) 0:03:53.072 ********* 2025-08-29 17:58:58.063908 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.063916 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.063921 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.063926 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.063931 | orchestrator | 2025-08-29 17:58:58.063936 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-08-29 17:58:58.063940 | orchestrator | Friday 29 August 2025 17:50:24 +0000 (0:00:01.490) 0:03:54.563 ********* 2025-08-29 17:58:58.063945 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': 
'192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 17:58:58.063950 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 17:58:58.063955 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 17:58:58.063960 | orchestrator | 2025-08-29 17:58:58.063964 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-08-29 17:58:58.063969 | orchestrator | Friday 29 August 2025 17:50:25 +0000 (0:00:01.013) 0:03:55.577 ********* 2025-08-29 17:58:58.063974 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 17:58:58.063979 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 17:58:58.063984 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 17:58:58.063988 | orchestrator | 2025-08-29 17:58:58.063993 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-08-29 17:58:58.063998 | orchestrator | Friday 29 August 2025 17:50:27 +0000 (0:00:02.052) 0:03:57.630 ********* 2025-08-29 17:58:58.064003 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064007 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064012 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.064017 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.064022 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.064026 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.064031 | orchestrator | 2025-08-29 17:58:58.064036 | orchestrator | TASK [ceph-config 
: Count number of osds for lvm scenario] ********************* 2025-08-29 17:58:58.064046 | orchestrator | Friday 29 August 2025 17:50:28 +0000 (0:00:01.306) 0:03:58.936 ********* 2025-08-29 17:58:58.064051 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064055 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064060 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.064065 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.064070 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.064074 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.064079 | orchestrator | 2025-08-29 17:58:58.064084 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-08-29 17:58:58.064088 | orchestrator | Friday 29 August 2025 17:50:29 +0000 (0:00:01.371) 0:04:00.308 ********* 2025-08-29 17:58:58.064093 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064098 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064102 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.064107 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.064112 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.064116 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.064121 | orchestrator | 2025-08-29 17:58:58.064126 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-08-29 17:58:58.064131 | orchestrator | Friday 29 August 2025 17:50:31 +0000 (0:00:01.216) 0:04:01.525 ********* 2025-08-29 17:58:58.064135 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064140 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064162 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.064168 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.064173 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.064177 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 17:58:58.064182 | orchestrator | 2025-08-29 17:58:58.064187 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-08-29 17:58:58.064192 | orchestrator | Friday 29 August 2025 17:50:31 +0000 (0:00:00.631) 0:04:02.156 ********* 2025-08-29 17:58:58.064196 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064201 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064206 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.064210 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.064215 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.064220 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.064224 | orchestrator | 2025-08-29 17:58:58.064229 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-08-29 17:58:58.064234 | orchestrator | Friday 29 August 2025 17:50:32 +0000 (0:00:01.107) 0:04:03.263 ********* 2025-08-29 17:58:58.064239 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064243 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064248 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.064253 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.064257 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.064262 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.064284 | orchestrator | 2025-08-29 17:58:58.064289 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-08-29 17:58:58.064294 | orchestrator | Friday 29 August 2025 17:50:33 +0000 (0:00:00.892) 0:04:04.156 ********* 2025-08-29 17:58:58.064299 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.064303 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.064308 | orchestrator | skipping: [testbed-node-2] 
2025-08-29 17:58:58.064313 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064317 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064322 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064327 | orchestrator |
2025-08-29 17:58:58.064331 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-08-29 17:58:58.064336 | orchestrator | Friday 29 August 2025 17:50:34 +0000 (0:00:00.745) 0:04:04.902 *********
2025-08-29 17:58:58.064341 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064345 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064350 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064354 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064359 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064364 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064368 | orchestrator |
2025-08-29 17:58:58.064373 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-08-29 17:58:58.064378 | orchestrator | Friday 29 August 2025 17:50:35 +0000 (0:00:01.373) 0:04:06.275 *********
2025-08-29 17:58:58.064382 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064387 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064392 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064396 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.064401 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.064405 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.064410 | orchestrator |
2025-08-29 17:58:58.064415 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-08-29 17:58:58.064419 | orchestrator | Friday 29 August 2025 17:50:39 +0000 (0:00:03.707) 0:04:09.983 *********
2025-08-29 17:58:58.064424 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064429 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064434 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064442 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.064446 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.064451 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.064456 | orchestrator |
2025-08-29 17:58:58.064461 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-08-29 17:58:58.064465 | orchestrator | Friday 29 August 2025 17:50:40 +0000 (0:00:00.787) 0:04:10.771 *********
2025-08-29 17:58:58.064470 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064475 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064479 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064484 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.064489 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.064493 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.064498 | orchestrator |
2025-08-29 17:58:58.064503 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-08-29 17:58:58.064508 | orchestrator | Friday 29 August 2025 17:50:41 +0000 (0:00:01.268) 0:04:12.039 *********
2025-08-29 17:58:58.064512 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064517 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064522 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064529 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064534 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064539 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064543 | orchestrator |
2025-08-29 17:58:58.064548 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-08-29 17:58:58.064553 | orchestrator | Friday 29 August 2025 17:50:43 +0000 (0:00:01.552) 0:04:13.591 *********
2025-08-29 17:58:58.064558 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064562 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064567 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064572 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.064576 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.064581 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.064586 | orchestrator |
2025-08-29 17:58:58.064591 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-08-29 17:58:58.064610 | orchestrator | Friday 29 August 2025 17:50:44 +0000 (0:00:01.085) 0:04:14.676 *********
2025-08-29 17:58:58.064616 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064620 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064625 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064631 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-08-29 17:58:58.064637 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-08-29 17:58:58.064643 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-08-29 17:58:58.064649 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-08-29 17:58:58.064657 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064662 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064667 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-08-29 17:58:58.064672 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-08-29 17:58:58.064677 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064682 | orchestrator |
2025-08-29 17:58:58.064687 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-08-29 17:58:58.064691 | orchestrator | Friday 29 August 2025 17:50:45 +0000 (0:00:01.100) 0:04:15.777 *********
2025-08-29 17:58:58.064696 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064701 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064705 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064710 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064715 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064719 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064724 | orchestrator |
2025-08-29 17:58:58.064729 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-08-29 17:58:58.064733 | orchestrator | Friday 29 August 2025 17:50:46 +0000 (0:00:01.319) 0:04:17.097 *********
2025-08-29 17:58:58.064738 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064743 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064748 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064752 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064757 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064762 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064766 | orchestrator |
2025-08-29 17:58:58.064771 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 17:58:58.064776 | orchestrator | Friday 29 August 2025 17:50:47 +0000 (0:00:00.604) 0:04:17.701 *********
2025-08-29 17:58:58.064781 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064788 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064793 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064797 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064802 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064807 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064811 | orchestrator |
2025-08-29 17:58:58.064816 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 17:58:58.064821 | orchestrator | Friday 29 August 2025 17:50:48 +0000 (0:00:00.775) 0:04:18.477 *********
2025-08-29 17:58:58.064826 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064830 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064835 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064840 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064844 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064849 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064854 | orchestrator |
2025-08-29 17:58:58.064858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 17:58:58.064863 | orchestrator | Friday 29 August 2025 17:50:48 +0000 (0:00:00.619) 0:04:19.096 *********
2025-08-29 17:58:58.064868 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064873 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064882 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064900 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.064906 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.064910 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.064915 | orchestrator |
2025-08-29 17:58:58.064920 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 17:58:58.064924 | orchestrator | Friday 29 August 2025 17:50:49 +0000 (0:00:01.018) 0:04:20.115 *********
2025-08-29 17:58:58.064929 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064934 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.064938 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.064943 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.064948 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.064952 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.064957 | orchestrator |
2025-08-29 17:58:58.064962 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 17:58:58.064966 | orchestrator | Friday 29 August 2025 17:50:50 +0000 (0:00:01.267) 0:04:21.383 *********
2025-08-29 17:58:58.064971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 17:58:58.064976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 17:58:58.064981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 17:58:58.064985 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.064990 | orchestrator |
2025-08-29 17:58:58.064995 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 17:58:58.064999 | orchestrator | Friday 29 August 2025 17:50:51 +0000 (0:00:01.083) 0:04:22.466 *********
2025-08-29 17:58:58.065004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 17:58:58.065009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 17:58:58.065013 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 17:58:58.065018 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065023 | orchestrator |
2025-08-29 17:58:58.065028 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 17:58:58.065032 | orchestrator | Friday 29 August 2025 17:50:52 +0000 (0:00:00.776) 0:04:23.243 *********
2025-08-29 17:58:58.065037 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 17:58:58.065042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 17:58:58.065047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 17:58:58.065051 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065056 | orchestrator |
2025-08-29 17:58:58.065061 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 17:58:58.065065 | orchestrator | Friday 29 August 2025 17:50:53 +0000 (0:00:01.046) 0:04:24.290 *********
2025-08-29 17:58:58.065070 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065075 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.065079 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.065084 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.065089 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.065093 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.065098 | orchestrator |
2025-08-29 17:58:58.065103 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 17:58:58.065108 | orchestrator | Friday 29 August 2025 17:50:54 +0000 (0:00:00.875) 0:04:25.165 *********
2025-08-29 17:58:58.065112 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-08-29 17:58:58.065117 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065122 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-08-29 17:58:58.065126 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.065131 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-08-29 17:58:58.065136 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 17:58:58.065140 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.065145 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 17:58:58.065155 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 17:58:58.065159 | orchestrator |
2025-08-29 17:58:58.065164 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-08-29 17:58:58.065169 | orchestrator | Friday 29 August 2025 17:50:58 +0000 (0:00:03.383) 0:04:28.548 *********
2025-08-29 17:58:58.065174 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.065179 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.065183 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.065188 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.065192 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.065197 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.065202 | orchestrator |
2025-08-29 17:58:58.065207 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:58:58.065211 | orchestrator | Friday 29 August 2025 17:51:02 +0000 (0:00:04.007) 0:04:32.556 *********
2025-08-29 17:58:58.065216 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.065221 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.065229 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.065233 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.065238 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.065243 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.065247 | orchestrator |
2025-08-29 17:58:58.065252 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-08-29 17:58:58.065257 | orchestrator | Friday 29 August 2025 17:51:03 +0000 (0:00:01.170) 0:04:33.726 *********
2025-08-29 17:58:58.065261 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065281 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.065286 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.065290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.065295 | orchestrator |
2025-08-29 17:58:58.065300 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 17:58:58.065305 | orchestrator | Friday 29 August 2025 17:51:04 +0000 (0:00:01.169) 0:04:34.896 *********
2025-08-29 17:58:58.065310 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.065314 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.065319 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.065324 | orchestrator |
2025-08-29 17:58:58.065328 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 17:58:58.065348 | orchestrator | Friday 29 August 2025 17:51:04 +0000 (0:00:00.388) 0:04:35.285 *********
2025-08-29 17:58:58.065354 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.065358 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.065363 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.065368 | orchestrator |
2025-08-29 17:58:58.065372 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 17:58:58.065377 | orchestrator | Friday 29 August 2025 17:51:06 +0000 (0:00:01.450) 0:04:36.735 *********
2025-08-29 17:58:58.065382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.065387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:58:58.065391 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:58:58.065396 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065401 | orchestrator |
2025-08-29 17:58:58.065406 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 17:58:58.065410 | orchestrator | Friday 29 August 2025 17:51:07 +0000 (0:00:01.015) 0:04:37.750 *********
2025-08-29 17:58:58.065415 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.065420 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.065424 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.065429 | orchestrator |
2025-08-29 17:58:58.065434 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-08-29 17:58:58.065439 | orchestrator | Friday 29 August 2025 17:51:07 +0000 (0:00:00.684) 0:04:38.435 *********
2025-08-29 17:58:58.065448 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065452 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.065457 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.065462 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.065467 | orchestrator |
2025-08-29 17:58:58.065471 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-08-29 17:58:58.065476 | orchestrator | Friday 29 August 2025 17:51:08 +0000 (0:00:00.911) 0:04:39.346 *********
2025-08-29 17:58:58.065481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.065486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.065490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.065495 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065500 | orchestrator |
2025-08-29 17:58:58.065505 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-08-29 17:58:58.065509 | orchestrator | Friday 29 August 2025 17:51:09 +0000 (0:00:00.697) 0:04:40.044 *********
2025-08-29 17:58:58.065514 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065519 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.065523 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.065528 | orchestrator |
2025-08-29 17:58:58.065533 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-08-29 17:58:58.065537 | orchestrator | Friday 29 August 2025 17:51:10 +0000 (0:00:00.565) 0:04:40.609 *********
2025-08-29 17:58:58.065542 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065547 | orchestrator |
2025-08-29 17:58:58.065551 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-08-29 17:58:58.065556 | orchestrator | Friday 29 August 2025 17:51:10 +0000 (0:00:00.245) 0:04:40.855 *********
2025-08-29 17:58:58.065561 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065566 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.065570 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.065575 | orchestrator |
2025-08-29 17:58:58.065580 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-08-29 17:58:58.065584 | orchestrator | Friday 29 August 2025 17:51:10 +0000 (0:00:00.356) 0:04:41.211 *********
2025-08-29 17:58:58.065589 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065594 | orchestrator |
2025-08-29 17:58:58.065598 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-08-29 17:58:58.065603 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.270) 0:04:41.482 *********
2025-08-29 17:58:58.065608 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065613 | orchestrator |
2025-08-29 17:58:58.065617 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-08-29 17:58:58.065622 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.243) 0:04:41.725 *********
2025-08-29 17:58:58.065627 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065632 | orchestrator |
2025-08-29 17:58:58.065636 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-08-29 17:58:58.065641 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.155) 0:04:41.881 *********
2025-08-29 17:58:58.065646 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065650 | orchestrator |
2025-08-29 17:58:58.065657 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-08-29 17:58:58.065662 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.252) 0:04:42.133 *********
2025-08-29 17:58:58.065667 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065672 | orchestrator |
2025-08-29 17:58:58.065676 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-08-29 17:58:58.065681 | orchestrator | Friday 29 August 2025 17:51:11 +0000 (0:00:00.218) 0:04:42.352 *********
2025-08-29 17:58:58.065690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.065695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.065700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.065705 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065709 | orchestrator |
2025-08-29 17:58:58.065714 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-08-29 17:58:58.065719 | orchestrator | Friday 29 August 2025 17:51:12 +0000 (0:00:00.733) 0:04:43.085 *********
2025-08-29 17:58:58.065723 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065728 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.065733 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.065738 | orchestrator |
2025-08-29 17:58:58.065756 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-08-29 17:58:58.065762 | orchestrator | Friday 29 August 2025 17:51:13 +0000 (0:00:00.595) 0:04:43.680 *********
2025-08-29 17:58:58.065766 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065771 | orchestrator |
2025-08-29 17:58:58.065776 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-08-29 17:58:58.065780 | orchestrator | Friday 29 August 2025 17:51:13 +0000 (0:00:00.232) 0:04:43.913 *********
2025-08-29 17:58:58.065785 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065790 | orchestrator |
2025-08-29 17:58:58.065794 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-08-29 17:58:58.065799 | orchestrator | Friday 29 August 2025 17:51:13 +0000 (0:00:00.203) 0:04:44.116 *********
2025-08-29 17:58:58.065804 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065809 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.065813 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.065818 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.065823 | orchestrator |
2025-08-29 17:58:58.065828 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-08-29 17:58:58.065832 | orchestrator | Friday 29 August 2025 17:51:14 +0000 (0:00:00.937) 0:04:45.054 *********
2025-08-29 17:58:58.065837 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.065842 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.065846 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.065851 | orchestrator |
2025-08-29 17:58:58.065856 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-08-29 17:58:58.065861 | orchestrator | Friday 29 August 2025 17:51:14 +0000 (0:00:00.308) 0:04:45.362 *********
2025-08-29 17:58:58.065865 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.065870 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.065875 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.065879 | orchestrator |
2025-08-29 17:58:58.065884 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-08-29 17:58:58.065889 | orchestrator | Friday 29 August 2025 17:51:16 +0000 (0:00:01.189) 0:04:46.552 *********
2025-08-29 17:58:58.065893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.065898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.065903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.065908 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.065913 | orchestrator |
2025-08-29 17:58:58.065917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-08-29 17:58:58.065922 | orchestrator | Friday 29 August 2025 17:51:16 +0000 (0:00:00.759) 0:04:47.311 *********
2025-08-29 17:58:58.065927 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.065931 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.065936 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.065941 | orchestrator |
2025-08-29 17:58:58.065946 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-08-29 17:58:58.065950 | orchestrator | Friday 29 August 2025 17:51:17 +0000 (0:00:00.339) 0:04:47.651 *********
2025-08-29 17:58:58.065958 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.065963 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.065968 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.065973 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.065977 | orchestrator |
2025-08-29 17:58:58.065982 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-08-29 17:58:58.065987 | orchestrator | Friday 29 August 2025 17:51:18 +0000 (0:00:00.905) 0:04:48.556 *********
2025-08-29 17:58:58.065992 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.065996 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.066001 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.066006 | orchestrator |
2025-08-29 17:58:58.066010 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-08-29 17:58:58.066036 | orchestrator | Friday 29 August 2025 17:51:18 +0000 (0:00:00.281) 0:04:48.838 *********
2025-08-29 17:58:58.066041 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.066046 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.066051 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.066055 | orchestrator |
2025-08-29 17:58:58.066060 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-08-29 17:58:58.066065 | orchestrator | Friday 29 August 2025 17:51:19 +0000 (0:00:01.347) 0:04:50.186 *********
2025-08-29 17:58:58.066070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.066074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.066082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.066087 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.066092 | orchestrator |
2025-08-29 17:58:58.066097 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-08-29 17:58:58.066101 | orchestrator | Friday 29 August 2025 17:51:20 +0000 (0:00:00.637) 0:04:50.823 *********
2025-08-29 17:58:58.066106 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.066111 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.066116 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.066120 | orchestrator |
2025-08-29 17:58:58.066125 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-08-29 17:58:58.066130 | orchestrator | Friday 29 August 2025 17:51:20 +0000 (0:00:00.332) 0:04:51.156 *********
2025-08-29 17:58:58.066134 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066139 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066144 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066149 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.066153 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.066158 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.066163 | orchestrator |
2025-08-29 17:58:58.066168 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-08-29 17:58:58.066172 | orchestrator | Friday 29 August 2025 17:51:21 +0000 (0:00:00.753) 0:04:51.910 *********
2025-08-29 17:58:58.066193 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.066198 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.066203 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.066208 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.066213 | orchestrator |
2025-08-29 17:58:58.066217 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-08-29 17:58:58.066222 | orchestrator | Friday 29 August 2025 17:51:22 +0000 (0:00:00.755) 0:04:52.666 *********
2025-08-29 17:58:58.066227 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066232 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066236 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066241 | orchestrator |
2025-08-29 17:58:58.066246 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-08-29 17:58:58.066254 | orchestrator | Friday 29 August 2025 17:51:22 +0000 (0:00:00.416) 0:04:53.083 *********
2025-08-29 17:58:58.066259 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.066264 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.066288 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.066293 | orchestrator |
2025-08-29 17:58:58.066298 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-08-29 17:58:58.066302 | orchestrator | Friday 29 August 2025 17:51:23 +0000 (0:00:01.308) 0:04:54.392 *********
2025-08-29 17:58:58.066307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.066312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:58:58.066317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:58:58.066321 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066326 | orchestrator |
2025-08-29 17:58:58.066331 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-08-29 17:58:58.066336 | orchestrator | Friday 29 August 2025 17:51:24 +0000 (0:00:00.595) 0:04:54.987 *********
2025-08-29 17:58:58.066340 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066345 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066350 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066355 | orchestrator |
2025-08-29 17:58:58.066359 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-08-29 17:58:58.066364 | orchestrator |
2025-08-29 17:58:58.066369 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:58:58.066374 | orchestrator | Friday 29 August 2025 17:51:25 +0000 (0:00:00.608) 0:04:55.595 *********
2025-08-29 17:58:58.066378 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.066383 | orchestrator |
2025-08-29 17:58:58.066388 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:58:58.066393 | orchestrator | Friday 29 August 2025 17:51:25 +0000 (0:00:00.681) 0:04:56.277 *********
2025-08-29 17:58:58.066398 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.066402 | orchestrator |
2025-08-29 17:58:58.066407 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:58:58.066412 | orchestrator | Friday 29 August 2025 17:51:26 +0000 (0:00:00.543) 0:04:56.820 *********
2025-08-29 17:58:58.066416 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066421 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066426 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066431 | orchestrator |
2025-08-29 17:58:58.066435 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:58:58.066440 | orchestrator | Friday 29 August 2025 17:51:27 +0000 (0:00:01.063) 0:04:57.883 *********
2025-08-29 17:58:58.066445 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066450 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066454 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066459 | orchestrator |
2025-08-29 17:58:58.066464 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:58:58.066468 | orchestrator | Friday 29 August 2025 17:51:27 +0000 (0:00:00.346) 0:04:58.229 *********
2025-08-29 17:58:58.066473 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066478 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066483 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066487 | orchestrator |
2025-08-29 17:58:58.066492 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:58:58.066497 | orchestrator | Friday 29 August 2025 17:51:28 +0000 (0:00:00.327) 0:04:58.557 *********
2025-08-29 17:58:58.066501 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066506 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066517 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066522 | orchestrator |
2025-08-29 17:58:58.066530 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:58:58.066534 | orchestrator | Friday 29 August 2025 17:51:28 +0000 (0:00:00.349) 0:04:58.906 *********
2025-08-29 17:58:58.066539 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066544 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066549 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066553 | orchestrator |
2025-08-29 17:58:58.066558 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:58:58.066563 | orchestrator | Friday 29 August 2025 17:51:29 +0000 (0:00:00.975) 0:04:59.882 *********
2025-08-29 17:58:58.066568 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066573 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066577 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066582 | orchestrator |
2025-08-29 17:58:58.066587 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:58:58.066591 | orchestrator | Friday 29 August 2025 17:51:29 +0000 (0:00:00.358) 0:05:00.240 *********
2025-08-29 17:58:58.066596 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066601 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066606 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066610 | orchestrator |
2025-08-29 17:58:58.066615 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:58:58.066636 | orchestrator | Friday 29 August 2025 17:51:30 +0000 (0:00:00.307) 0:05:00.547 *********
2025-08-29 17:58:58.066641 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066646 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066651 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066655 | orchestrator |
2025-08-29 17:58:58.066660 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:58:58.066665 | orchestrator | Friday 29 August 2025 17:51:30 +0000 (0:00:00.748) 0:05:01.296 *********
2025-08-29 17:58:58.066670 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066674 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066679 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066684 | orchestrator |
2025-08-29 17:58:58.066688 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:58:58.066693 | orchestrator | Friday 29 August 2025 17:51:31 +0000 (0:00:01.035) 0:05:02.331 *********
2025-08-29 17:58:58.066698 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066703 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066707 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066712 | orchestrator |
2025-08-29 17:58:58.066717 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 17:58:58.066721 | orchestrator | Friday 29 August 2025 17:51:32 +0000 (0:00:00.348) 0:05:02.680 *********
2025-08-29 17:58:58.066726 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066731 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066735 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066740 | orchestrator |
2025-08-29 17:58:58.066745 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 17:58:58.066750 | orchestrator | Friday 29 August 2025 17:51:32 +0000 (0:00:00.339) 0:05:03.020 *********
2025-08-29 17:58:58.066754 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066759 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066764 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066768 | orchestrator |
2025-08-29 17:58:58.066773 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 17:58:58.066778 | orchestrator | Friday 29 August 2025 17:51:32 +0000 (0:00:00.357) 0:05:03.378 *********
2025-08-29 17:58:58.066783 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066787 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066792 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066797 | orchestrator |
2025-08-29 17:58:58.066806 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 17:58:58.066810 | orchestrator | Friday 29 August 2025 17:51:33 +0000 (0:00:00.648) 0:05:04.026 *********
2025-08-29 17:58:58.066815 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066820 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066825 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066829 | orchestrator |
2025-08-29 17:58:58.066834 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 17:58:58.066839 | orchestrator | Friday 29 August 2025 17:51:33 +0000 (0:00:00.434) 0:05:04.461 *********
2025-08-29 17:58:58.066843 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066848 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066853 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066857 | orchestrator |
2025-08-29 17:58:58.066862 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 17:58:58.066867 | orchestrator | Friday 29 August 2025 17:51:34 +0000 (0:00:00.552) 0:05:05.013 *********
2025-08-29 17:58:58.066872 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.066876 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.066881 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.066886 | orchestrator |
2025-08-29 17:58:58.066890 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 17:58:58.066895 | orchestrator | Friday 29 August 2025 17:51:35 +0000 (0:00:00.672) 0:05:05.686 *********
2025-08-29 17:58:58.066900 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066905 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066909 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066914 | orchestrator |
2025-08-29 17:58:58.066919 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 17:58:58.066924 | orchestrator | Friday 29 August 2025 17:51:35 +0000 (0:00:00.754) 0:05:06.440 *********
2025-08-29 17:58:58.066928 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066933 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066938 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066942 | orchestrator |
2025-08-29 17:58:58.066947 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 17:58:58.066952 | orchestrator | Friday 29 August 2025 17:51:36 +0000 (0:00:00.521) 0:05:06.962 *********
2025-08-29 17:58:58.066956 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066961 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066966 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.066970 | orchestrator |
2025-08-29 17:58:58.066975 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-08-29 17:58:58.066983 | orchestrator | Friday 29 August 2025 17:51:37 +0000 (0:00:00.701) 0:05:07.663 *********
2025-08-29 17:58:58.066988 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.066992 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.066997 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067002 | orchestrator |
2025-08-29 17:58:58.067006 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-08-29 17:58:58.067011 | orchestrator | Friday 29 August 2025 17:51:37 +0000 (0:00:00.461) 0:05:08.125 *********
2025-08-29 17:58:58.067016 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.067021 | orchestrator |
2025-08-29 17:58:58.067026 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-08-29 17:58:58.067030 | orchestrator | Friday 29 August 2025 17:51:38 +0000 (0:00:00.997) 0:05:09.122 *********
2025-08-29 17:58:58.067035 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.067040 | orchestrator |
2025-08-29 17:58:58.067044 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-08-29 17:58:58.067049 | orchestrator | Friday 29 August 2025 17:51:38 +0000 (0:00:00.179) 0:05:09.302 *********
2025-08-29 17:58:58.067054 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-08-29 17:58:58.067062 | orchestrator |
2025-08-29 17:58:58.067082 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-08-29 17:58:58.067088 | orchestrator | Friday 29 August 2025 17:51:39 +0000 (0:00:01.125) 0:05:10.427 *********
2025-08-29 17:58:58.067092 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067097 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.067102 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067106 | orchestrator |
2025-08-29 17:58:58.067111 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-08-29 17:58:58.067116 | orchestrator | Friday 29 August 2025 17:51:40 +0000 (0:00:00.367) 0:05:10.795 *********
2025-08-29 17:58:58.067121 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067125 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.067130 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067135 | orchestrator |
2025-08-29 17:58:58.067139 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-08-29 17:58:58.067144 | orchestrator | Friday 29 August 2025 17:51:41 +0000 (0:00:00.710) 0:05:11.506 *********
2025-08-29 17:58:58.067149 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067154 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067158 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067163 | orchestrator |
2025-08-29 17:58:58.067168 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-08-29 17:58:58.067172 | orchestrator | Friday 29 August 2025 17:51:42 +0000 (0:00:01.266) 0:05:12.772 *********
2025-08-29 17:58:58.067177 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067182 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067186 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067191 | orchestrator |
2025-08-29 17:58:58.067196 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-08-29 17:58:58.067201 | orchestrator | Friday 29 August 2025 17:51:43 +0000 (0:00:00.843) 0:05:13.616 *********
2025-08-29 17:58:58.067206 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067210 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067215 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067220 | orchestrator |
2025-08-29 17:58:58.067224 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-08-29 17:58:58.067229 | orchestrator | Friday 29 August 2025 17:51:43 +0000 (0:00:00.678) 0:05:14.294 *********
2025-08-29 17:58:58.067234 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067239 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.067243 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067248 | orchestrator |
2025-08-29 17:58:58.067253 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-08-29 17:58:58.067258 | orchestrator | Friday 29 August 2025 17:51:44 +0000 (0:00:01.064) 0:05:15.359 *********
2025-08-29 17:58:58.067262 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067299 | orchestrator |
2025-08-29 17:58:58.067304 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-08-29 17:58:58.067309 | orchestrator | Friday 29 August 2025 17:51:46 +0000 (0:00:01.239) 0:05:16.599 *********
2025-08-29 17:58:58.067314 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067319 | orchestrator |
2025-08-29 17:58:58.067323 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-08-29 17:58:58.067328 | orchestrator | Friday 29 August 2025 17:51:46 +0000 (0:00:00.616) 0:05:17.215 *********
2025-08-29 17:58:58.067333 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.067338 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.067342 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.067347 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 17:58:58.067352 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-08-29 17:58:58.067357 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 17:58:58.067366 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 17:58:58.067371 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-08-29 17:58:58.067376 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-08-29 17:58:58.067380 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-08-29 17:58:58.067385 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 17:58:58.067390 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-08-29 17:58:58.067395 | orchestrator |
2025-08-29 17:58:58.067399 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-08-29 17:58:58.067404 | orchestrator | Friday 29 August 2025 17:51:50 +0000 (0:00:03.321) 0:05:20.537 *********
2025-08-29 17:58:58.067409 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067414 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067418 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067423 | orchestrator |
2025-08-29 17:58:58.067431 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-08-29 17:58:58.067436 | orchestrator | Friday 29 August 2025 17:51:51 +0000 (0:00:01.547) 0:05:22.084 *********
2025-08-29 17:58:58.067440 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067445 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.067450 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067455 | orchestrator |
2025-08-29 17:58:58.067459 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-08-29 17:58:58.067464 | orchestrator | Friday 29 August 2025 17:51:52 +0000 (0:00:00.661) 0:05:22.745 *********
2025-08-29 17:58:58.067469 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067474 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.067478 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067483 | orchestrator |
2025-08-29 17:58:58.067488 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-08-29 17:58:58.067492 | orchestrator | Friday 29 August 2025 17:51:52 +0000 (0:00:00.335) 0:05:23.081 *********
2025-08-29 17:58:58.067497 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067502 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067507 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067511 | orchestrator |
2025-08-29 17:58:58.067516 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-08-29 17:58:58.067537 | orchestrator | Friday 29 August 2025 17:51:54 +0000 (0:00:01.592) 0:05:24.674 *********
2025-08-29 17:58:58.067543 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067548 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067553 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067558 | orchestrator |
2025-08-29 17:58:58.067562 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-08-29 17:58:58.067567 | orchestrator | Friday 29 August 2025 17:51:55 +0000 (0:00:01.350) 0:05:26.024 *********
2025-08-29 17:58:58.067572 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.067576 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.067581 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.067586 | orchestrator |
2025-08-29 17:58:58.067590 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-08-29 17:58:58.067595 | orchestrator | Friday 29 August 2025 17:51:55 +0000 (0:00:00.336) 0:05:26.361 *********
2025-08-29 17:58:58.067600 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.067605 | orchestrator |
2025-08-29 17:58:58.067609 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-08-29 17:58:58.067614 | orchestrator | Friday 29 August 2025 17:51:56 +0000 (0:00:01.049) 0:05:27.410 *********
2025-08-29 17:58:58.067619 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.067623 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.067628 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.067633 | orchestrator |
2025-08-29 17:58:58.067642 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-08-29 17:58:58.067646 | orchestrator | Friday 29 August 2025 17:51:57 +0000 (0:00:00.397) 0:05:27.807 *********
2025-08-29 17:58:58.067651 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.067656 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.067661 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.067665 | orchestrator |
2025-08-29 17:58:58.067670 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-08-29 17:58:58.067675 | orchestrator | Friday 29 August 2025 17:51:58 +0000 (0:00:00.754) 0:05:28.561 *********
2025-08-29 17:58:58.067680 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.067684 | orchestrator |
2025-08-29 17:58:58.067689 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-08-29 17:58:58.067694 | orchestrator | Friday 29 August 2025 17:51:59 +0000 (0:00:00.978) 0:05:29.540 *********
2025-08-29 17:58:58.067698 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067703 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067708 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067713 | orchestrator |
2025-08-29 17:58:58.067717 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-08-29 17:58:58.067721 | orchestrator | Friday 29 August 2025 17:52:00 +0000 (0:00:01.738) 0:05:31.278 *********
2025-08-29 17:58:58.067726 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067730 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067735 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067739 | orchestrator |
2025-08-29 17:58:58.067744 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-08-29 17:58:58.067748 | orchestrator | Friday 29 August 2025 17:52:01 +0000 (0:00:01.173) 0:05:32.451 *********
2025-08-29 17:58:58.067753 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067757 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067761 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067766 | orchestrator |
2025-08-29 17:58:58.067770 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-08-29 17:58:58.067775 | orchestrator | Friday 29 August 2025 17:52:04 +0000 (0:00:02.244) 0:05:34.696 *********
2025-08-29 17:58:58.067779 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.067784 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.067788 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.067792 | orchestrator |
2025-08-29 17:58:58.067797 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-08-29 17:58:58.067801 | orchestrator | Friday 29 August 2025 17:52:07 +0000 (0:00:03.028) 0:05:37.724 *********
2025-08-29 17:58:58.067806 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.067810 | orchestrator |
2025-08-29 17:58:58.067815 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-08-29 17:58:58.067819 | orchestrator | Friday 29 August 2025 17:52:07 +0000 (0:00:00.600) 0:05:38.325 *********
2025-08-29 17:58:58.067824 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-08-29 17:58:58.067831 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067836 | orchestrator |
2025-08-29 17:58:58.067840 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-08-29 17:58:58.067845 | orchestrator | Friday 29 August 2025 17:52:30 +0000 (0:00:22.209) 0:06:00.535 *********
2025-08-29 17:58:58.067849 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.067854 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.067858 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.067863 | orchestrator |
2025-08-29 17:58:58.067867 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-08-29 17:58:58.067872 | orchestrator | Friday 29 August 2025 17:52:40 +0000 (0:00:10.902) 0:06:11.437 *********
2025-08-29 17:58:58.067882 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.067886 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.067891 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.067895 | orchestrator |
2025-08-29 17:58:58.067900 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-08-29 17:58:58.067904 | orchestrator | Friday 29 August 2025 17:52:41 +0000 (0:00:00.367) 0:06:11.804 *********
2025-08-29 17:58:58.067925 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-08-29 17:58:58.067933 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-08-29 17:58:58.067939 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-08-29 17:58:58.067945 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-08-29 17:58:58.067950 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-08-29 17:58:58.067955 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__cc92f2718a37af0eb0e7d4edec24f28ee3e62517'}])
2025-08-29 17:58:58.067961 | orchestrator |
2025-08-29 17:58:58.067965 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:58:58.067970 | orchestrator | Friday 29 August 2025 17:52:55 +0000 (0:00:14.032) 0:06:25.837 *********
2025-08-29 17:58:58.067974 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.067979 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.067983 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.067988 | orchestrator |
2025-08-29 17:58:58.067992 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-08-29 17:58:58.067997 | orchestrator | Friday 29 August 2025 17:52:55 +0000 (0:00:00.534) 0:06:26.371 *********
2025-08-29 17:58:58.068001 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.068006 | orchestrator |
2025-08-29 17:58:58.068010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 17:58:58.068015 | orchestrator | Friday 29 August 2025 17:52:56 +0000 (0:00:00.685) 0:06:27.057 *********
2025-08-29 17:58:58.068023 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068027 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068032 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068036 | orchestrator |
2025-08-29 17:58:58.068041 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 17:58:58.068045 | orchestrator | Friday 29 August 2025 17:52:57 +0000 (0:00:00.628) 0:06:27.685 *********
2025-08-29 17:58:58.068050 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068057 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068061 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068066 | orchestrator |
2025-08-29 17:58:58.068070 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 17:58:58.068075 | orchestrator | Friday 29 August 2025 17:52:57 +0000 (0:00:00.400) 0:06:28.085 *********
2025-08-29 17:58:58.068079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.068084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:58:58.068088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:58:58.068093 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068097 | orchestrator |
2025-08-29 17:58:58.068102 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 17:58:58.068106 | orchestrator | Friday 29 August 2025 17:52:58 +0000 (0:00:00.692) 0:06:28.778 *********
2025-08-29 17:58:58.068111 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068115 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068120 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068124 | orchestrator |
2025-08-29 17:58:58.068129 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-08-29 17:58:58.068133 | orchestrator |
2025-08-29 17:58:58.068137 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:58:58.068156 | orchestrator | Friday 29 August 2025 17:52:58 +0000 (0:00:00.636) 0:06:29.414 *********
2025-08-29 17:58:58.068161 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.068166 | orchestrator |
2025-08-29 17:58:58.068170 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:58:58.068175 | orchestrator | Friday 29 August 2025 17:52:59 +0000 (0:00:00.845) 0:06:30.260 *********
2025-08-29 17:58:58.068179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.068184 | orchestrator |
2025-08-29 17:58:58.068188 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:58:58.068192 | orchestrator | Friday 29 August 2025 17:53:00 +0000 (0:00:00.559) 0:06:30.820 *********
2025-08-29 17:58:58.068197 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068201 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068206 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068210 | orchestrator |
2025-08-29 17:58:58.068214 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:58:58.068219 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:01.016) 0:06:31.836 *********
2025-08-29 17:58:58.068223 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068228 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068232 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068237 | orchestrator |
2025-08-29 17:58:58.068241 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:58:58.068245 | orchestrator | Friday 29 August 2025 17:53:01 +0000 (0:00:00.361) 0:06:32.198 *********
2025-08-29 17:58:58.068250 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068254 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068259 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068263 | orchestrator |
2025-08-29 17:58:58.068279 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:58:58.068287 | orchestrator | Friday 29 August 2025 17:53:02 +0000 (0:00:00.361) 0:06:32.560 *********
2025-08-29 17:58:58.068291 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068296 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068300 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068305 | orchestrator |
2025-08-29 17:58:58.068309 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:58:58.068314 | orchestrator | Friday 29 August 2025 17:53:02 +0000 (0:00:00.379) 0:06:32.939 *********
2025-08-29 17:58:58.068318 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068322 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068327 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068331 | orchestrator |
2025-08-29 17:58:58.068336 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:58:58.068340 | orchestrator | Friday 29 August 2025 17:53:03 +0000 (0:00:01.074) 0:06:34.014 *********
2025-08-29 17:58:58.068345 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068349 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068353 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068358 | orchestrator |
2025-08-29 17:58:58.068362 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:58:58.068367 | orchestrator | Friday 29 August 2025 17:53:03 +0000 (0:00:00.452) 0:06:34.466 *********
2025-08-29 17:58:58.068371 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068375 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068380 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068384 | orchestrator |
2025-08-29 17:58:58.068389 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:58:58.068393 | orchestrator | Friday 29 August 2025 17:53:04 +0000 (0:00:00.326) 0:06:34.793 *********
2025-08-29 17:58:58.068398 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068402 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068406 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068411 | orchestrator |
2025-08-29 17:58:58.068415 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:58:58.068420 | orchestrator | Friday 29 August 2025 17:53:05 +0000 (0:00:00.811) 0:06:35.604 *********
2025-08-29 17:58:58.068424 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068429 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068433 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068437 | orchestrator |
2025-08-29 17:58:58.068442 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:58:58.068446 | orchestrator | Friday 29 August 2025 17:53:06 +0000 (0:00:01.196) 0:06:36.801 *********
2025-08-29 17:58:58.068451 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068455 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068460 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068464 | orchestrator |
2025-08-29 17:58:58.068471 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 17:58:58.068475 | orchestrator | Friday 29 August 2025 17:53:06 +0000 (0:00:00.333) 0:06:37.134 *********
2025-08-29 17:58:58.068480 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068484 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068489 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068493 | orchestrator |
2025-08-29 17:58:58.068498 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 17:58:58.068502 | orchestrator | Friday 29 August 2025 17:53:07 +0000 (0:00:00.374) 0:06:37.508 *********
2025-08-29 17:58:58.068507 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068511 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068516 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068520 | orchestrator |
2025-08-29 17:58:58.068524 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 17:58:58.068529 | orchestrator | Friday 29 August 2025 17:53:07 +0000 (0:00:00.388) 0:06:37.897 ********* 2025-08-29 17:58:58.068537 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.068541 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.068546 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.068550 | orchestrator | 2025-08-29 17:58:58.068555 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 17:58:58.068574 | orchestrator | Friday 29 August 2025 17:53:08 +0000 (0:00:00.647) 0:06:38.545 ********* 2025-08-29 17:58:58.068579 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.068584 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.068588 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.068593 | orchestrator | 2025-08-29 17:58:58.068597 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 17:58:58.068602 | orchestrator | Friday 29 August 2025 17:53:08 +0000 (0:00:00.347) 0:06:38.892 ********* 2025-08-29 17:58:58.068606 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.068611 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.068615 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.068620 | orchestrator | 2025-08-29 17:58:58.068624 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 17:58:58.068629 | orchestrator | Friday 29 August 2025 17:53:08 +0000 (0:00:00.339) 0:06:39.232 ********* 2025-08-29 17:58:58.068633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.068638 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.068642 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.068646 | orchestrator | 
2025-08-29 17:58:58.068651 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 17:58:58.068655 | orchestrator | Friday 29 August 2025 17:53:09 +0000 (0:00:00.324) 0:06:39.556 *********
2025-08-29 17:58:58.068660 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068664 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068669 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068673 | orchestrator |
2025-08-29 17:58:58.068678 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 17:58:58.068682 | orchestrator | Friday 29 August 2025 17:53:09 +0000 (0:00:00.641) 0:06:40.197 *********
2025-08-29 17:58:58.068686 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068691 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068695 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068700 | orchestrator |
2025-08-29 17:58:58.068704 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 17:58:58.068709 | orchestrator | Friday 29 August 2025 17:53:10 +0000 (0:00:00.387) 0:06:40.585 *********
2025-08-29 17:58:58.068713 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068718 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068722 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068726 | orchestrator |
2025-08-29 17:58:58.068731 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-08-29 17:58:58.068735 | orchestrator | Friday 29 August 2025 17:53:10 +0000 (0:00:00.625) 0:06:41.210 *********
2025-08-29 17:58:58.068740 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.068744 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:58:58.068749 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:58:58.068754 | orchestrator |
2025-08-29 17:58:58.068758 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-08-29 17:58:58.068762 | orchestrator | Friday 29 August 2025 17:53:11 +0000 (0:00:00.947) 0:06:42.157 *********
2025-08-29 17:58:58.068767 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.068771 | orchestrator |
2025-08-29 17:58:58.068776 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-08-29 17:58:58.068780 | orchestrator | Friday 29 August 2025 17:53:12 +0000 (0:00:00.850) 0:06:43.008 *********
2025-08-29 17:58:58.068790 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.068795 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.068799 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.068803 | orchestrator |
2025-08-29 17:58:58.068808 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-08-29 17:58:58.068812 | orchestrator | Friday 29 August 2025 17:53:13 +0000 (0:00:00.707) 0:06:43.716 *********
2025-08-29 17:58:58.068817 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.068821 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.068826 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.068830 | orchestrator |
2025-08-29 17:58:58.068835 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-08-29 17:58:58.068839 | orchestrator | Friday 29 August 2025 17:53:13 +0000 (0:00:00.429) 0:06:44.146 *********
2025-08-29 17:58:58.068843 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068848 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068853 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068857 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-08-29 17:58:58.068861 | orchestrator |
2025-08-29 17:58:58.068869 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-08-29 17:58:58.068873 | orchestrator | Friday 29 August 2025 17:53:24 +0000 (0:00:10.732) 0:06:54.878 *********
2025-08-29 17:58:58.068878 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.068882 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.068887 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.068891 | orchestrator |
2025-08-29 17:58:58.068896 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-08-29 17:58:58.068900 | orchestrator | Friday 29 August 2025 17:53:25 +0000 (0:00:00.670) 0:06:55.548 *********
2025-08-29 17:58:58.068905 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068909 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 17:58:58.068914 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 17:58:58.068918 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068923 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.068927 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.068939 | orchestrator |
2025-08-29 17:58:58.068944 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-08-29 17:58:58.068948 | orchestrator | Friday 29 August 2025 17:53:27 +0000 (0:00:02.258) 0:06:57.807 *********
2025-08-29 17:58:58.068968 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068973 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 17:58:58.068978 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 17:58:58.068983 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 17:58:58.068987 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-08-29 17:58:58.068991 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-08-29 17:58:58.068996 | orchestrator |
2025-08-29 17:58:58.069000 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-08-29 17:58:58.069005 | orchestrator | Friday 29 August 2025 17:53:28 +0000 (0:00:01.369) 0:06:59.177 *********
2025-08-29 17:58:58.069009 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.069014 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.069018 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.069022 | orchestrator |
2025-08-29 17:58:58.069027 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-08-29 17:58:58.069031 | orchestrator | Friday 29 August 2025 17:53:29 +0000 (0:00:00.750) 0:06:59.928 *********
2025-08-29 17:58:58.069037 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.069044 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.069051 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.069063 | orchestrator |
2025-08-29 17:58:58.069069 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-08-29 17:58:58.069080 | orchestrator | Friday 29 August 2025 17:53:30 +0000 (0:00:00.681) 0:07:00.609 *********
2025-08-29 17:58:58.069090 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.069098 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.069105 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.069112 | orchestrator |
2025-08-29 17:58:58.069120 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-08-29 17:58:58.069127 | orchestrator | Friday 29 August 2025 17:53:30 +0000 (0:00:00.565) 0:07:01.175 *********
2025-08-29 17:58:58.069133 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.069141 | orchestrator |
2025-08-29 17:58:58.069148 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-08-29 17:58:58.069156 | orchestrator | Friday 29 August 2025 17:53:31 +0000 (0:00:00.729) 0:07:01.904 *********
2025-08-29 17:58:58.069162 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.069169 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.069177 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.069184 | orchestrator |
2025-08-29 17:58:58.069191 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-08-29 17:58:58.069199 | orchestrator | Friday 29 August 2025 17:53:32 +0000 (0:00:00.717) 0:07:02.622 *********
2025-08-29 17:58:58.069203 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.069208 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.069212 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.069217 | orchestrator |
2025-08-29 17:58:58.069221 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-08-29 17:58:58.069226 | orchestrator | Friday 29 August 2025 17:53:32 +0000 (0:00:00.378) 0:07:03.001 *********
2025-08-29 17:58:58.069230 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.069235 | orchestrator |
2025-08-29 17:58:58.069239 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-08-29 17:58:58.069244 | orchestrator | Friday 29 August 2025 17:53:33 +0000 (0:00:00.562) 0:07:03.563 *********
2025-08-29 17:58:58.069258 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.069262 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.069304 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.069309 | orchestrator |
2025-08-29 17:58:58.069314 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-08-29 17:58:58.069318 | orchestrator | Friday 29 August 2025 17:53:34 +0000 (0:00:01.588) 0:07:05.152 *********
2025-08-29 17:58:58.069323 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.069327 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.069332 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.069336 | orchestrator |
2025-08-29 17:58:58.069341 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-08-29 17:58:58.069345 | orchestrator | Friday 29 August 2025 17:53:35 +0000 (0:00:01.192) 0:07:06.344 *********
2025-08-29 17:58:58.069350 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.069354 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.069358 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.069363 | orchestrator |
2025-08-29 17:58:58.069367 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-08-29 17:58:58.069376 | orchestrator | Friday 29 August 2025 17:53:37 +0000 (0:00:01.754) 0:07:08.099 *********
2025-08-29 17:58:58.069380 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.069385 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.069390 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.069396 | orchestrator |
2025-08-29 17:58:58.069404 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-08-29 17:58:58.069411 | orchestrator | Friday 29 August 2025 17:53:39 +0000 (0:00:01.968) 0:07:10.068 *********
2025-08-29 17:58:58.069425 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.069432 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.069439 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-08-29 17:58:58.069446 | orchestrator |
2025-08-29 17:58:58.069453 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-08-29 17:58:58.069460 | orchestrator | Friday 29 August 2025 17:53:40 +0000 (0:00:00.745) 0:07:10.814 *********
2025-08-29 17:58:58.069467 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-08-29 17:58:58.069475 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-08-29 17:58:58.069513 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-08-29 17:58:58.069520 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-08-29 17:58:58.069525 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:58:58.069529 | orchestrator |
2025-08-29 17:58:58.069534 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-08-29 17:58:58.069538 | orchestrator | Friday 29 August 2025 17:54:04 +0000 (0:00:24.092) 0:07:34.907 *********
2025-08-29 17:58:58.069542 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:58:58.069547 | orchestrator |
2025-08-29 17:58:58.069551 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-08-29 17:58:58.069556 | orchestrator | Friday 29 August 2025 17:54:05 +0000 (0:00:01.206) 0:07:36.113 *********
2025-08-29 17:58:58.069560 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.069564 | orchestrator |
2025-08-29 17:58:58.069569 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-08-29 17:58:58.069573 | orchestrator | Friday 29 August 2025 17:54:05 +0000 (0:00:00.353) 0:07:36.467 *********
2025-08-29 17:58:58.069578 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.069582 | orchestrator |
2025-08-29 17:58:58.069586 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-08-29 17:58:58.069591 | orchestrator | Friday 29 August 2025 17:54:06 +0000 (0:00:00.176) 0:07:36.644 *********
2025-08-29 17:58:58.069595 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-08-29 17:58:58.069600 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-08-29 17:58:58.069604 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-08-29 17:58:58.069609 | orchestrator |
2025-08-29 17:58:58.069613 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-08-29 17:58:58.069618 | orchestrator | Friday 29 August 2025 17:54:12 +0000 (0:00:06.529) 0:07:43.173 *********
2025-08-29 17:58:58.069622 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-08-29 17:58:58.069626 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-08-29 17:58:58.069631 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-08-29 17:58:58.069635 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-08-29 17:58:58.069640 | orchestrator |
2025-08-29 17:58:58.069644 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:58:58.069648 | orchestrator | Friday 29 August 2025 17:54:17 +0000 (0:00:05.078) 0:07:48.252 *********
2025-08-29 17:58:58.069653 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.069657 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.069662 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.069666 | orchestrator |
2025-08-29 17:58:58.069670 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-08-29 17:58:58.069675 | orchestrator | Friday 29 August 2025 17:54:18 +0000 (0:00:00.761) 0:07:49.013 *********
2025-08-29 17:58:58.069684 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 17:58:58.069689 | orchestrator |
2025-08-29 17:58:58.069693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-08-29 17:58:58.069698 | orchestrator | Friday 29 August 2025 17:54:19 +0000 (0:00:00.557) 0:07:49.571 *********
2025-08-29 17:58:58.069702 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.069706 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.069711 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.069715 | orchestrator |
2025-08-29 17:58:58.069720 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-08-29 17:58:58.069724 | orchestrator | Friday 29 August 2025 17:54:19 +0000 (0:00:00.653) 0:07:50.224 *********
2025-08-29 17:58:58.069728 | orchestrator | changed: [testbed-node-0]
2025-08-29 17:58:58.069733 | orchestrator | changed: [testbed-node-1]
2025-08-29 17:58:58.069737 | orchestrator | changed: [testbed-node-2]
2025-08-29 17:58:58.069742 | orchestrator |
2025-08-29 17:58:58.069746 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-08-29 17:58:58.069750 | orchestrator | Friday 29 August 2025 17:54:20 +0000 (0:00:01.158) 0:07:51.382 *********
2025-08-29 17:58:58.069755 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 17:58:58.069759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 17:58:58.069764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 17:58:58.069772 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.069776 | orchestrator |
2025-08-29 17:58:58.069781 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-08-29 17:58:58.069785 | orchestrator | Friday 29 August 2025 17:54:21 +0000 (0:00:00.656) 0:07:52.038 *********
2025-08-29 17:58:58.069790 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.069794 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.069798 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.069803 | orchestrator |
2025-08-29 17:58:58.069807 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-08-29 17:58:58.069811 | orchestrator |
2025-08-29 17:58:58.069816 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:58:58.069820 | orchestrator | Friday 29 August 2025 17:54:22 +0000 (0:00:00.861) 0:07:52.900 *********
2025-08-29 17:58:58.069825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.069829 | orchestrator |
2025-08-29 17:58:58.069833 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:58:58.069837 | orchestrator | Friday 29 August 2025 17:54:22 +0000 (0:00:00.538) 0:07:53.439 *********
2025-08-29 17:58:58.069856 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.069861 | orchestrator |
2025-08-29 17:58:58.069865 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:58:58.069869 | orchestrator | Friday 29 August 2025 17:54:23 +0000 (0:00:00.788) 0:07:54.228 *********
2025-08-29 17:58:58.069873 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.069877 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.069881 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.069885 | orchestrator |
2025-08-29 17:58:58.069889 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:58:58.069893 | orchestrator | Friday 29 August 2025 17:54:24 +0000 (0:00:00.347) 0:07:54.576 *********
2025-08-29 17:58:58.069897 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.069901 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.069905 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.069909 | orchestrator |
2025-08-29 17:58:58.069913 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:58:58.069921 | orchestrator | Friday 29 August 2025 17:54:24 +0000 (0:00:00.668) 0:07:55.244 *********
2025-08-29 17:58:58.069925 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.069929 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.069933 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.069937 | orchestrator |
2025-08-29 17:58:58.069941 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:58:58.069945 | orchestrator | Friday 29 August 2025 17:54:25 +0000 (0:00:00.753) 0:07:55.998 *********
2025-08-29 17:58:58.069949 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.069953 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.069957 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.069961 | orchestrator |
2025-08-29 17:58:58.069965 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:58:58.069969 | orchestrator | Friday 29 August 2025 17:54:26 +0000 (0:00:00.990) 0:07:56.989 *********
2025-08-29 17:58:58.069973 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.069977 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.069981 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.069985 | orchestrator |
2025-08-29 17:58:58.069989 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:58:58.069993 | orchestrator | Friday 29 August 2025 17:54:26 +0000 (0:00:00.381) 0:07:57.370 *********
2025-08-29 17:58:58.069997 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070001 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070005 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070009 | orchestrator |
2025-08-29 17:58:58.070035 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:58:58.070040 | orchestrator | Friday 29 August 2025 17:54:27 +0000 (0:00:00.375) 0:07:57.746 *********
2025-08-29 17:58:58.070044 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070048 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070052 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070056 | orchestrator |
2025-08-29 17:58:58.070060 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:58:58.070064 | orchestrator | Friday 29 August 2025 17:54:27 +0000 (0:00:00.335) 0:07:58.081 *********
2025-08-29 17:58:58.070069 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070073 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070077 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070081 | orchestrator |
2025-08-29 17:58:58.070085 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:58:58.070089 | orchestrator | Friday 29 August 2025 17:54:28 +0000 (0:00:00.993) 0:07:59.074 *********
2025-08-29 17:58:58.070093 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070097 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070101 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070105 | orchestrator |
2025-08-29 17:58:58.070109 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:58:58.070113 | orchestrator | Friday 29 August 2025 17:54:29 +0000 (0:00:00.886) 0:07:59.960 *********
2025-08-29 17:58:58.070117 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070121 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070125 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070129 | orchestrator |
2025-08-29 17:58:58.070133 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 17:58:58.070138 | orchestrator | Friday 29 August 2025 17:54:29 +0000 (0:00:00.459) 0:08:00.419 *********
2025-08-29 17:58:58.070142 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070146 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070150 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070154 | orchestrator |
2025-08-29 17:58:58.070158 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 17:58:58.070162 | orchestrator | Friday 29 August 2025 17:54:30 +0000 (0:00:00.374) 0:08:00.794 *********
2025-08-29 17:58:58.070174 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070178 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070182 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070186 | orchestrator |
2025-08-29 17:58:58.070190 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 17:58:58.070194 | orchestrator | Friday 29 August 2025 17:54:30 +0000 (0:00:00.647) 0:08:01.441 *********
2025-08-29 17:58:58.070198 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070202 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070206 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070210 | orchestrator |
2025-08-29 17:58:58.070214 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 17:58:58.070218 | orchestrator | Friday 29 August 2025 17:54:31 +0000 (0:00:00.348) 0:08:01.790 *********
2025-08-29 17:58:58.070222 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070226 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070230 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070234 | orchestrator |
2025-08-29 17:58:58.070238 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 17:58:58.070243 | orchestrator | Friday 29 August 2025 17:54:31 +0000 (0:00:00.394) 0:08:02.185 *********
2025-08-29 17:58:58.070247 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070251 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070255 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070259 | orchestrator |
2025-08-29 17:58:58.070276 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 17:58:58.070281 | orchestrator | Friday 29 August 2025 17:54:32 +0000 (0:00:00.330) 0:08:02.515 *********
2025-08-29 17:58:58.070285 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070289 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070293 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070297 | orchestrator |
2025-08-29 17:58:58.070301 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 17:58:58.070305 | orchestrator | Friday 29 August 2025 17:54:32 +0000 (0:00:00.329) 0:08:02.845 *********
2025-08-29 17:58:58.070309 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070313 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070317 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070321 | orchestrator |
2025-08-29 17:58:58.070325 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 17:58:58.070329 | orchestrator | Friday 29 August 2025 17:54:32 +0000 (0:00:00.627) 0:08:03.472 *********
2025-08-29 17:58:58.070333 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070337 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070341 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070345 | orchestrator |
2025-08-29 17:58:58.070349 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 17:58:58.070353 | orchestrator | Friday 29 August 2025 17:54:33 +0000 (0:00:00.357) 0:08:03.830 *********
2025-08-29 17:58:58.070357 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070361 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070365 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070369 | orchestrator |
2025-08-29 17:58:58.070373 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-08-29 17:58:58.070377 | orchestrator | Friday 29 August 2025 17:54:33 +0000 (0:00:00.617) 0:08:04.447 *********
2025-08-29 17:58:58.070381 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070385 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070389 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070392 | orchestrator |
2025-08-29 17:58:58.070397 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-08-29 17:58:58.070401 | orchestrator | Friday 29 August 2025 17:54:34 +0000 (0:00:00.606) 0:08:05.054 *********
2025-08-29 17:58:58.070405 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 17:58:58.070413 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 17:58:58.070417 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 17:58:58.070421 | orchestrator |
2025-08-29 17:58:58.070425 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-08-29 17:58:58.070429 | orchestrator | Friday 29 August 2025 17:54:35 +0000 (0:00:00.749) 0:08:05.803 *********
2025-08-29 17:58:58.070433 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.070437 | orchestrator |
2025-08-29 17:58:58.070441 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-08-29 17:58:58.070445 | orchestrator | Friday 29 August 2025 17:54:35 +0000 (0:00:00.580) 0:08:06.384 *********
2025-08-29 17:58:58.070449 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070453 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070457 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070461 | orchestrator |
2025-08-29 17:58:58.070465 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-08-29 17:58:58.070469 | orchestrator | Friday 29 August 2025 17:54:36 +0000 (0:00:00.646) 0:08:07.031 *********
2025-08-29 17:58:58.070473 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070477 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070481 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070485 | orchestrator |
2025-08-29 17:58:58.070489 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-08-29 17:58:58.070493 | orchestrator | Friday 29 August 2025 17:54:36 +0000 (0:00:00.369) 0:08:07.401 *********
2025-08-29 17:58:58.070497 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070501 |
orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.070505 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.070509 | orchestrator | 2025-08-29 17:58:58.070513 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 17:58:58.070517 | orchestrator | Friday 29 August 2025 17:54:37 +0000 (0:00:00.688) 0:08:08.090 ********* 2025-08-29 17:58:58.070521 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.070525 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.070529 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.070533 | orchestrator | 2025-08-29 17:58:58.070537 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 17:58:58.070543 | orchestrator | Friday 29 August 2025 17:54:37 +0000 (0:00:00.374) 0:08:08.465 ********* 2025-08-29 17:58:58.070548 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 17:58:58.070552 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 17:58:58.070556 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 17:58:58.070560 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 17:58:58.070564 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 17:58:58.070568 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 17:58:58.070572 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 17:58:58.070576 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 17:58:58.070584 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 17:58:58.070588 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 17:58:58.070592 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 17:58:58.070596 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 17:58:58.070604 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 17:58:58.070608 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 17:58:58.070612 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 17:58:58.070616 | orchestrator | 2025-08-29 17:58:58.070620 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-08-29 17:58:58.070624 | orchestrator | Friday 29 August 2025 17:54:41 +0000 (0:00:03.197) 0:08:11.662 ********* 2025-08-29 17:58:58.070628 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.070632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.070636 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.070640 | orchestrator | 2025-08-29 17:58:58.070644 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 17:58:58.070648 | orchestrator | Friday 29 August 2025 17:54:41 +0000 (0:00:00.376) 0:08:12.039 ********* 2025-08-29 17:58:58.070652 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.070656 | orchestrator | 2025-08-29 17:58:58.070660 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 17:58:58.070664 | orchestrator | Friday 29 August 2025 17:54:42 +0000 (0:00:00.635) 
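The "Apply operating system tuning" task above loops over the kernel tunables shown in the item dicts and persists them via sysctl. As a minimal sketch of what was applied (the helper name `render_sysctl_cmds` is illustrative, not part of ceph-ansible):

```python
# Tunables taken verbatim from the log items above; the comments note the
# usual rationale for each (an interpretation, not stated in the log).
TUNABLES = [
    {"name": "fs.aio-max-nr", "value": "1048576"},    # async I/O contexts for BlueStore
    {"name": "fs.file-max", "value": 26234859},       # system-wide open-file limit
    {"name": "vm.zone_reclaim_mode", "value": 0},     # avoid NUMA-local reclaim stalls
    {"name": "vm.swappiness", "value": 10},           # prefer dropping cache over swapping
    {"name": "vm.min_free_kbytes", "value": "67584"}, # reserve derived from the default
]

def render_sysctl_cmds(tunables):
    """Render the equivalent one-shot `sysctl -w` invocations."""
    return ["sysctl -w {}={}".format(t["name"], t["value"]) for t in tunables]

for cmd in render_sysctl_cmds(TUNABLES):
    print(cmd)
```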
0:08:12.675 *********
2025-08-29 17:58:58.070668 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-08-29 17:58:58.070672 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-08-29 17:58:58.070676 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-08-29 17:58:58.070680 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-08-29 17:58:58.070684 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-08-29 17:58:58.070688 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-08-29 17:58:58.070692 | orchestrator |
2025-08-29 17:58:58.070696 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-08-29 17:58:58.070700 | orchestrator | Friday 29 August 2025 17:54:43 +0000 (0:00:01.299) 0:08:13.975 *********
2025-08-29 17:58:58.070704 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.070708 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.070712 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 17:58:58.070716 | orchestrator |
2025-08-29 17:58:58.070720 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-08-29 17:58:58.070724 | orchestrator | Friday 29 August 2025 17:54:45 +0000 (0:00:01.956) 0:08:15.931 *********
2025-08-29 17:58:58.070728 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.070732 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.070736 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.070740 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 17:58:58.070744 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-08-29 17:58:58.070748 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.070752 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 17:58:58.070756 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-08-29 17:58:58.070760 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.070764 | orchestrator |
2025-08-29 17:58:58.070768 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-08-29 17:58:58.070772 | orchestrator | Friday 29 August 2025 17:54:46 +0000 (0:00:01.212) 0:08:17.143 *********
2025-08-29 17:58:58.070776 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:58:58.070780 | orchestrator |
2025-08-29 17:58:58.070784 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-08-29 17:58:58.070788 | orchestrator | Friday 29 August 2025 17:54:48 +0000 (0:00:02.137) 0:08:19.281 *********
2025-08-29 17:58:58.070795 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.070799 | orchestrator |
2025-08-29 17:58:58.070806 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-08-29 17:58:58.070810 | orchestrator | Friday 29 August 2025 17:54:49 +0000 (0:00:00.565) 0:08:19.847 *********
2025-08-29 17:58:58.070814 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12', 'data_vg': 'ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12'})
2025-08-29 17:58:58.070819 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-76bb4758-fd8e-569b-82df-4997dbff6ccd', 'data_vg': 'ceph-76bb4758-fd8e-569b-82df-4997dbff6ccd'})
2025-08-29 17:58:58.070823 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129', 'data_vg': 'ceph-7e0f67bb-93ba-55c2-b7d3-c3a17e91e129'})
2025-08-29 17:58:58.070827 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ab048149-1b6d-515a-8df0-d9a146565eca', 'data_vg': 'ceph-ab048149-1b6d-515a-8df0-d9a146565eca'})
2025-08-29 17:58:58.070834 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-90167df7-514b-5586-921e-4d7a2964fdd2', 'data_vg': 'ceph-90167df7-514b-5586-921e-4d7a2964fdd2'})
2025-08-29 17:58:58.070838 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-756a9a3b-59dc-526e-9851-f6b5408065e4', 'data_vg': 'ceph-756a9a3b-59dc-526e-9851-f6b5408065e4'})
2025-08-29 17:58:58.070842 | orchestrator |
2025-08-29 17:58:58.070846 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-08-29 17:58:58.070850 | orchestrator | Friday 29 August 2025 17:55:29 +0000 (0:00:40.481) 0:09:00.328 *********
2025-08-29 17:58:58.070854 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.070858 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.070862 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.070866 | orchestrator |
2025-08-29 17:58:58.070870 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-08-29 17:58:58.070874 | orchestrator | Friday 29 August 2025 17:55:30 +0000 (0:00:00.422) 0:09:00.751 *********
2025-08-29 17:58:58.070878 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.070882 | orchestrator |
2025-08-29 17:58:58.070886 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-08-29 17:58:58.070890 | orchestrator | Friday 29 August 2025 17:55:30 +0000 (0:00:00.583) 0:09:01.334 *********
2025-08-29 17:58:58.070894 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070898 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070902 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070906 | orchestrator |
2025-08-29
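Each item in the "Use ceph-volume to create osds" task above names a pre-created LVM logical volume (`data`) inside a volume group (`data_vg`). Under the lvm scenario this ends up as a `ceph-volume lvm create` call per item; the exact command shape below is an assumption sketched from the item structure, not copied from the log:

```python
# Build the assumed ceph-volume invocation for one lvm_volumes item.
# The --bluestore flag and <vg>/<lv> --data form are assumptions about
# how the role drives ceph-volume; nothing here is executed.
def ceph_volume_cmd(item, objectstore="bluestore"):
    return [
        "ceph-volume", "lvm", "create",
        f"--{objectstore}",
        "--data", f"{item['data_vg']}/{item['data']}",
    ]

# One of testbed-node-5's items from the log above.
item = {
    "data": "osd-block-1b4aa328-f83b-56f5-ada4-b8257b659e12",
    "data_vg": "ceph-1b4aa328-f83b-56f5-ada4-b8257b659e12",
}
print(" ".join(ceph_volume_cmd(item)))
```

The 40-second task duration (0:00:40.481) reflects that each node created two such OSDs, preparing and activating the logical volumes.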
17:58:58.070910 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-08-29 17:58:58.070914 | orchestrator | Friday 29 August 2025 17:55:31 +0000 (0:00:01.060) 0:09:02.394 *********
2025-08-29 17:58:58.070918 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.070922 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.070926 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.070930 | orchestrator |
2025-08-29 17:58:58.070934 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-08-29 17:58:58.070938 | orchestrator | Friday 29 August 2025 17:55:34 +0000 (0:00:02.481) 0:09:04.876 *********
2025-08-29 17:58:58.070942 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.070946 | orchestrator |
2025-08-29 17:58:58.070950 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-08-29 17:58:58.070954 | orchestrator | Friday 29 August 2025 17:55:34 +0000 (0:00:00.576) 0:09:05.453 *********
2025-08-29 17:58:58.070958 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.070962 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.070972 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.070976 | orchestrator |
2025-08-29 17:58:58.070980 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-08-29 17:58:58.070984 | orchestrator | Friday 29 August 2025 17:55:36 +0000 (0:00:01.556) 0:09:07.010 *********
2025-08-29 17:58:58.070988 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.070992 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.070996 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.071000 | orchestrator |
2025-08-29 17:58:58.071004 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-08-29 17:58:58.071008 | orchestrator | Friday 29 August 2025 17:55:37 +0000 (0:00:01.184) 0:09:08.194 *********
2025-08-29 17:58:58.071012 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.071016 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.071020 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.071024 | orchestrator |
2025-08-29 17:58:58.071028 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-08-29 17:58:58.071032 | orchestrator | Friday 29 August 2025 17:55:39 +0000 (0:00:01.784) 0:09:09.978 *********
2025-08-29 17:58:58.071036 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071040 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071044 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071048 | orchestrator |
2025-08-29 17:58:58.071052 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-08-29 17:58:58.071056 | orchestrator | Friday 29 August 2025 17:55:39 +0000 (0:00:00.324) 0:09:10.303 *********
2025-08-29 17:58:58.071060 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071064 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071068 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071072 | orchestrator |
2025-08-29 17:58:58.071076 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-08-29 17:58:58.071080 | orchestrator | Friday 29 August 2025 17:55:40 +0000 (0:00:00.640) 0:09:10.943 *********
2025-08-29 17:58:58.071083 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 17:58:58.071087 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-08-29 17:58:58.071094 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-08-29 17:58:58.071098 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-08-29 17:58:58.071102 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-08-29 17:58:58.071106 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-08-29 17:58:58.071110 | orchestrator |
2025-08-29 17:58:58.071114 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-08-29 17:58:58.071118 | orchestrator | Friday 29 August 2025 17:55:41 +0000 (0:00:00.993) 0:09:11.937 *********
2025-08-29 17:58:58.071122 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-08-29 17:58:58.071126 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-08-29 17:58:58.071130 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-08-29 17:58:58.071134 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-08-29 17:58:58.071138 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-08-29 17:58:58.071142 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-08-29 17:58:58.071146 | orchestrator |
2025-08-29 17:58:58.071150 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-08-29 17:58:58.071154 | orchestrator | Friday 29 August 2025 17:55:43 +0000 (0:00:02.159) 0:09:14.096 *********
2025-08-29 17:58:58.071158 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-08-29 17:58:58.071162 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-08-29 17:58:58.071168 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-08-29 17:58:58.071172 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-08-29 17:58:58.071176 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-08-29 17:58:58.071180 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-08-29 17:58:58.071184 | orchestrator |
2025-08-29 17:58:58.071188 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-08-29 17:58:58.071196 | orchestrator | Friday 29 August 2025 17:55:47 +0000 (0:00:03.501) 0:09:17.598 *********
2025-08-29 17:58:58.071200 | orchestrator |
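The per-item numbers in the "Systemd start osd" loop above are the OSD ids collected earlier (0 and 3 on testbed-node-3, 1 and 5 on testbed-node-4, 4 and 2 on testbed-node-5). Each id gets a templated unit instance grouped under `ceph-osd.target`; a minimal sketch of that mapping, assuming the conventional `ceph-osd@<id>.service` naming (the helper names are illustrative):

```python
# Map collected OSD ids to their templated systemd unit instances.
def units_for(osd_ids):
    return [f"ceph-osd@{osd_id}.service" for osd_id in sorted(osd_ids)]

# Sketch of the enable/start sequence the role performs per node:
# enable the grouping target, then enable and start each OSD instance.
def start_commands(osd_ids, target="ceph-osd.target"):
    cmds = [f"systemctl enable {target}"]
    cmds += [f"systemctl enable --now {unit}" for unit in units_for(osd_ids)]
    return cmds

for cmd in start_commands([0, 3]):  # testbed-node-3's ids from the log
    print(cmd)
```

Grouping instances under a target is what lets a single `systemctl stop ceph-osd.target` quiesce every OSD on a node.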
skipping: [testbed-node-3]
2025-08-29 17:58:58.071204 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071208 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:58:58.071212 | orchestrator |
2025-08-29 17:58:58.071216 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-08-29 17:58:58.071220 | orchestrator | Friday 29 August 2025 17:55:50 +0000 (0:00:02.934) 0:09:20.532 *********
2025-08-29 17:58:58.071224 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071228 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071232 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-08-29 17:58:58.071236 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 17:58:58.071240 | orchestrator |
2025-08-29 17:58:58.071244 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-08-29 17:58:58.071248 | orchestrator | Friday 29 August 2025 17:56:02 +0000 (0:00:12.521) 0:09:33.054 *********
2025-08-29 17:58:58.071252 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071255 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071259 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071263 | orchestrator |
2025-08-29 17:58:58.071295 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:58:58.071300 | orchestrator | Friday 29 August 2025 17:56:03 +0000 (0:00:01.168) 0:09:34.222 *********
2025-08-29 17:58:58.071303 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071308 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071312 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071315 | orchestrator |
2025-08-29 17:58:58.071319 | orchestrator | RUNNING HANDLER [ceph-handler : Osds
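The "Wait for all osd to be up" task above is a retry loop (60 retries budgeted; one FAILED - RETRYING before success, about 12.5 s total) that is delegated to a single monitor and polls cluster state until every OSD reports up. A minimal sketch of that loop, assuming a `num_osds`/`num_up_osds` shape for the polled status (in the real case the poller would run something like `ceph osd stat` on the monitor; the function names here are illustrative):

```python
def wait_for_osds_up(get_stat, expected, retries=60):
    """Poll an injected status source until all expected OSDs are up.

    get_stat() returns a dict with assumed keys num_osds/num_up_osds;
    a real implementation would also sleep between attempts.
    """
    for _ in range(retries):
        stat = get_stat()
        if stat["num_osds"] == expected and stat["num_up_osds"] == expected:
            return True
    return False

# Simulated poller: the first sample still has one OSD booting (the
# "FAILED - RETRYING" line above), the second sees all six up.
responses = iter([
    {"num_osds": 6, "num_up_osds": 5},
    {"num_osds": 6, "num_up_osds": 6},
])
print(wait_for_osds_up(lambda: next(responses), expected=6))  # True
```

Only after this gate passes is the earlier `noup` flag cleanup meaningful: OSDs were created and started while `noup` suppressed premature peering, then unflagged and confirmed up here.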
handler] **********************************
2025-08-29 17:58:58.071323 | orchestrator | Friday 29 August 2025 17:56:04 +0000 (0:00:00.385) 0:09:34.607 *********
2025-08-29 17:58:58.071327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.071332 | orchestrator |
2025-08-29 17:58:58.071336 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-08-29 17:58:58.071340 | orchestrator | Friday 29 August 2025 17:56:04 +0000 (0:00:00.577) 0:09:35.185 *********
2025-08-29 17:58:58.071344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.071348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.071352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.071356 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071360 | orchestrator |
2025-08-29 17:58:58.071364 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-08-29 17:58:58.071368 | orchestrator | Friday 29 August 2025 17:56:05 +0000 (0:00:01.055) 0:09:36.241 *********
2025-08-29 17:58:58.071372 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071375 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071379 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071383 | orchestrator |
2025-08-29 17:58:58.071387 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-08-29 17:58:58.071392 | orchestrator | Friday 29 August 2025 17:56:06 +0000 (0:00:00.365) 0:09:36.606 *********
2025-08-29 17:58:58.071395 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071399 | orchestrator |
2025-08-29 17:58:58.071403 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-08-29 17:58:58.071407 | orchestrator | Friday 29 August 2025 17:56:06 +0000 (0:00:00.267) 0:09:36.873 *********
2025-08-29 17:58:58.071411 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071415 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071419 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071428 | orchestrator |
2025-08-29 17:58:58.071432 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-08-29 17:58:58.071436 | orchestrator | Friday 29 August 2025 17:56:06 +0000 (0:00:00.409) 0:09:37.282 *********
2025-08-29 17:58:58.071440 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071444 | orchestrator |
2025-08-29 17:58:58.071448 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-08-29 17:58:58.071464 | orchestrator | Friday 29 August 2025 17:56:07 +0000 (0:00:00.226) 0:09:37.509 *********
2025-08-29 17:58:58.071469 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071473 | orchestrator |
2025-08-29 17:58:58.071477 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-08-29 17:58:58.071481 | orchestrator | Friday 29 August 2025 17:56:07 +0000 (0:00:00.253) 0:09:37.762 *********
2025-08-29 17:58:58.071485 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071488 | orchestrator |
2025-08-29 17:58:58.071492 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-08-29 17:58:58.071496 | orchestrator | Friday 29 August 2025 17:56:07 +0000 (0:00:00.151) 0:09:37.914 *********
2025-08-29 17:58:58.071500 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071504 | orchestrator |
2025-08-29 17:58:58.071508 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-08-29 17:58:58.071512 | orchestrator | Friday 29 August 2025 17:56:07 +0000 (0:00:00.229) 0:09:38.143 *********
2025-08-29 17:58:58.071516 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071520 | orchestrator |
2025-08-29 17:58:58.071524 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-08-29 17:58:58.071528 | orchestrator | Friday 29 August 2025 17:56:08 +0000 (0:00:00.887) 0:09:39.031 *********
2025-08-29 17:58:58.071535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.071539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.071543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.071547 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071551 | orchestrator |
2025-08-29 17:58:58.071555 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-08-29 17:58:58.071558 | orchestrator | Friday 29 August 2025 17:56:08 +0000 (0:00:00.413) 0:09:39.444 *********
2025-08-29 17:58:58.071562 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071566 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071569 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071573 | orchestrator |
2025-08-29 17:58:58.071577 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-08-29 17:58:58.071580 | orchestrator | Friday 29 August 2025 17:56:09 +0000 (0:00:00.353) 0:09:39.797 *********
2025-08-29 17:58:58.071584 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071587 | orchestrator |
2025-08-29 17:58:58.071591 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-08-29 17:58:58.071595 | orchestrator | Friday 29 August 2025 17:56:09 +0000 (0:00:00.241) 0:09:40.039 *********
2025-08-29 17:58:58.071598 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071602 | orchestrator |
2025-08-29 17:58:58.071606 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-08-29 17:58:58.071609 | orchestrator |
2025-08-29 17:58:58.071613 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:58:58.071616 | orchestrator | Friday 29 August 2025 17:56:10 +0000 (0:00:00.766) 0:09:40.805 *********
2025-08-29 17:58:58.071620 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.071624 | orchestrator |
2025-08-29 17:58:58.071628 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:58:58.071632 | orchestrator | Friday 29 August 2025 17:56:11 +0000 (0:00:01.371) 0:09:42.176 *********
2025-08-29 17:58:58.071639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.071643 | orchestrator |
2025-08-29 17:58:58.071647 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:58:58.071650 | orchestrator | Friday 29 August 2025 17:56:13 +0000 (0:00:01.370) 0:09:43.547 *********
2025-08-29 17:58:58.071654 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.071657 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071661 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071665 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.071668 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071672 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.071676 | orchestrator |
2025-08-29 17:58:58.071679 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:58:58.071683 | orchestrator | Friday 29 August 2025 17:56:14 +0000 (0:00:00.972) 0:09:44.519 *********
2025-08-29 17:58:58.071687 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.071690 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.071694 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.071698 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.071701 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.071705 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.071708 | orchestrator |
2025-08-29 17:58:58.071712 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:58:58.071716 | orchestrator | Friday 29 August 2025 17:56:15 +0000 (0:00:01.023) 0:09:45.542 *********
2025-08-29 17:58:58.071719 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.071723 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.071727 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.071730 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.071734 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.071738 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.071741 | orchestrator |
2025-08-29 17:58:58.071745 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:58:58.071748 | orchestrator | Friday 29 August 2025 17:56:16 +0000 (0:00:01.358) 0:09:46.901 *********
2025-08-29 17:58:58.071752 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.071756 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.071759 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.071763 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.071767 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.071770 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.071774 | orchestrator |
2025-08-29 17:58:58.071780 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:58:58.071783 | orchestrator | Friday 29 August 2025 17:56:17 +0000 (0:00:00.992) 0:09:47.893 *********
2025-08-29 17:58:58.071787 | orchestrator | ok: [testbed-node-0]
2025-08-29 17:58:58.071791 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071794 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071798 | orchestrator | ok: [testbed-node-1]
2025-08-29 17:58:58.071802 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071805 | orchestrator | ok: [testbed-node-2]
2025-08-29 17:58:58.071809 | orchestrator |
2025-08-29 17:58:58.071813 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:58:58.071816 | orchestrator | Friday 29 August 2025 17:56:18 +0000 (0:00:01.055) 0:09:48.948 *********
2025-08-29 17:58:58.071820 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.071824 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.071827 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.071831 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071835 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071838 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.071842 | orchestrator |
2025-08-29 17:58:58.071849 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:58:58.071853 | orchestrator | Friday 29 August 2025 17:56:19 +0000 (0:00:00.604) 0:09:49.553 *********
2025-08-29 17:58:58.071859 | orchestrator | skipping: [testbed-node-0]
2025-08-29 17:58:58.071863 | orchestrator | skipping: [testbed-node-1]
2025-08-29 17:58:58.071866 | orchestrator | skipping: [testbed-node-2]
2025-08-29 17:58:58.071870 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.071873 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.071877 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.071881 | orchestrator | 2025-08-29 17:58:58.071884 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 17:58:58.071888 | orchestrator | Friday 29 August 2025 17:56:19 +0000 (0:00:00.885) 0:09:50.439 ********* 2025-08-29 17:58:58.071892 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.071895 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.071899 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.071903 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.071906 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.071910 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.071913 | orchestrator | 2025-08-29 17:58:58.071917 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 17:58:58.071921 | orchestrator | Friday 29 August 2025 17:56:21 +0000 (0:00:01.053) 0:09:51.492 ********* 2025-08-29 17:58:58.071924 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.071928 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.071932 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.071935 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.071939 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.071942 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.071946 | orchestrator | 2025-08-29 17:58:58.071950 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 17:58:58.071953 | orchestrator | Friday 29 August 2025 17:56:22 +0000 (0:00:01.721) 0:09:53.213 ********* 2025-08-29 17:58:58.071957 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.071961 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.071964 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.071968 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 17:58:58.071971 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.071975 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.071979 | orchestrator | 2025-08-29 17:58:58.071982 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 17:58:58.071986 | orchestrator | Friday 29 August 2025 17:56:23 +0000 (0:00:00.647) 0:09:53.861 ********* 2025-08-29 17:58:58.071990 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.071993 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.071997 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.072001 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072004 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072008 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072012 | orchestrator | 2025-08-29 17:58:58.072015 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 17:58:58.072019 | orchestrator | Friday 29 August 2025 17:56:24 +0000 (0:00:00.940) 0:09:54.801 ********* 2025-08-29 17:58:58.072023 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.072026 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.072030 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.072034 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072037 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072041 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072044 | orchestrator | 2025-08-29 17:58:58.072048 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 17:58:58.072052 | orchestrator | Friday 29 August 2025 17:56:25 +0000 (0:00:00.752) 0:09:55.553 ********* 2025-08-29 17:58:58.072055 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.072063 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.072066 | orchestrator 
| skipping: [testbed-node-2] 2025-08-29 17:58:58.072070 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072074 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072077 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072081 | orchestrator | 2025-08-29 17:58:58.072084 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 17:58:58.072088 | orchestrator | Friday 29 August 2025 17:56:26 +0000 (0:00:01.120) 0:09:56.674 ********* 2025-08-29 17:58:58.072092 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.072095 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.072099 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.072103 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072106 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072110 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072114 | orchestrator | 2025-08-29 17:58:58.072117 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 17:58:58.072121 | orchestrator | Friday 29 August 2025 17:56:26 +0000 (0:00:00.692) 0:09:57.367 ********* 2025-08-29 17:58:58.072125 | orchestrator | skipping: [testbed-node-0] 2025-08-29 17:58:58.072128 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.072132 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.072135 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072139 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072143 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072146 | orchestrator | 2025-08-29 17:58:58.072152 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 17:58:58.072156 | orchestrator | Friday 29 August 2025 17:56:27 +0000 (0:00:00.999) 0:09:58.367 ********* 2025-08-29 17:58:58.072160 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
17:58:58.072163 | orchestrator | skipping: [testbed-node-1] 2025-08-29 17:58:58.072167 | orchestrator | skipping: [testbed-node-2] 2025-08-29 17:58:58.072170 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072174 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072178 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072181 | orchestrator | 2025-08-29 17:58:58.072185 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 17:58:58.072189 | orchestrator | Friday 29 August 2025 17:56:28 +0000 (0:00:00.709) 0:09:59.077 ********* 2025-08-29 17:58:58.072192 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072196 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.072200 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.072203 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072207 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072211 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072214 | orchestrator | 2025-08-29 17:58:58.072218 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 17:58:58.072224 | orchestrator | Friday 29 August 2025 17:56:29 +0000 (0:00:00.994) 0:10:00.071 ********* 2025-08-29 17:58:58.072228 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072232 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.072235 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.072239 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072242 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072246 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072250 | orchestrator | 2025-08-29 17:58:58.072253 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 17:58:58.072257 | orchestrator | Friday 29 August 2025 17:56:30 +0000 (0:00:00.668) 0:10:00.740 ********* 
2025-08-29 17:58:58.072261 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072264 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.072277 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.072281 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072285 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072288 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072296 | orchestrator | 2025-08-29 17:58:58.072300 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-08-29 17:58:58.072303 | orchestrator | Friday 29 August 2025 17:56:31 +0000 (0:00:01.651) 0:10:02.391 ********* 2025-08-29 17:58:58.072307 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.072311 | orchestrator | 2025-08-29 17:58:58.072315 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-08-29 17:58:58.072318 | orchestrator | Friday 29 August 2025 17:56:36 +0000 (0:00:04.175) 0:10:06.567 ********* 2025-08-29 17:58:58.072322 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072326 | orchestrator | 2025-08-29 17:58:58.072329 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-08-29 17:58:58.072333 | orchestrator | Friday 29 August 2025 17:56:38 +0000 (0:00:02.494) 0:10:09.061 ********* 2025-08-29 17:58:58.072337 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072340 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.072344 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.072348 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.072351 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.072355 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.072358 | orchestrator | 2025-08-29 17:58:58.072362 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-08-29 17:58:58.072366 | 
orchestrator | Friday 29 August 2025 17:56:40 +0000 (0:00:01.555) 0:10:10.617 ********* 2025-08-29 17:58:58.072369 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.072373 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.072377 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.072380 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.072384 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.072388 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.072391 | orchestrator | 2025-08-29 17:58:58.072395 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-08-29 17:58:58.072398 | orchestrator | Friday 29 August 2025 17:56:41 +0000 (0:00:01.193) 0:10:11.811 ********* 2025-08-29 17:58:58.072402 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.072407 | orchestrator | 2025-08-29 17:58:58.072411 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-08-29 17:58:58.072414 | orchestrator | Friday 29 August 2025 17:56:42 +0000 (0:00:01.335) 0:10:13.146 ********* 2025-08-29 17:58:58.072418 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.072422 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.072425 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.072429 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.072432 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.072436 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.072440 | orchestrator | 2025-08-29 17:58:58.072443 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-08-29 17:58:58.072447 | orchestrator | Friday 29 August 2025 17:56:44 +0000 (0:00:01.532) 0:10:14.678 ********* 2025-08-29 
17:58:58.072451 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.072454 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.072458 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.072462 | orchestrator | changed: [testbed-node-2] 2025-08-29 17:58:58.072465 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.072469 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.072472 | orchestrator | 2025-08-29 17:58:58.072476 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-08-29 17:58:58.072480 | orchestrator | Friday 29 August 2025 17:56:47 +0000 (0:00:03.273) 0:10:17.952 ********* 2025-08-29 17:58:58.072484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.072492 | orchestrator | 2025-08-29 17:58:58.072497 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-08-29 17:58:58.072501 | orchestrator | Friday 29 August 2025 17:56:48 +0000 (0:00:01.425) 0:10:19.378 ********* 2025-08-29 17:58:58.072505 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072509 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.072512 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.072516 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072520 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072523 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072527 | orchestrator | 2025-08-29 17:58:58.072531 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-08-29 17:58:58.072534 | orchestrator | Friday 29 August 2025 17:56:49 +0000 (0:00:00.884) 0:10:20.262 ********* 2025-08-29 17:58:58.072538 | orchestrator | changed: [testbed-node-0] 2025-08-29 17:58:58.072542 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 17:58:58.072545 | orchestrator | changed: [testbed-node-1] 2025-08-29 17:58:58.072549 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.072552 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.072556 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.072560 | orchestrator | 2025-08-29 17:58:58.072563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-08-29 17:58:58.072567 | orchestrator | Friday 29 August 2025 17:56:51 +0000 (0:00:02.124) 0:10:22.386 ********* 2025-08-29 17:58:58.072573 | orchestrator | ok: [testbed-node-0] 2025-08-29 17:58:58.072577 | orchestrator | ok: [testbed-node-1] 2025-08-29 17:58:58.072580 | orchestrator | ok: [testbed-node-2] 2025-08-29 17:58:58.072584 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072588 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072591 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072595 | orchestrator | 2025-08-29 17:58:58.072598 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-08-29 17:58:58.072602 | orchestrator | 2025-08-29 17:58:58.072606 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 17:58:58.072609 | orchestrator | Friday 29 August 2025 17:56:53 +0000 (0:00:01.171) 0:10:23.558 ********* 2025-08-29 17:58:58.072613 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.072617 | orchestrator | 2025-08-29 17:58:58.072620 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 17:58:58.072624 | orchestrator | Friday 29 August 2025 17:56:53 +0000 (0:00:00.801) 0:10:24.359 ********* 2025-08-29 17:58:58.072628 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for 
testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.072631 | orchestrator | 2025-08-29 17:58:58.072635 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 17:58:58.072639 | orchestrator | Friday 29 August 2025 17:56:54 +0000 (0:00:00.641) 0:10:25.001 ********* 2025-08-29 17:58:58.072642 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072646 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072650 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072653 | orchestrator | 2025-08-29 17:58:58.072657 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 17:58:58.072661 | orchestrator | Friday 29 August 2025 17:56:54 +0000 (0:00:00.397) 0:10:25.398 ********* 2025-08-29 17:58:58.072664 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072668 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072672 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072675 | orchestrator | 2025-08-29 17:58:58.072679 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 17:58:58.072683 | orchestrator | Friday 29 August 2025 17:56:56 +0000 (0:00:01.097) 0:10:26.496 ********* 2025-08-29 17:58:58.072686 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072690 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072697 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072700 | orchestrator | 2025-08-29 17:58:58.072704 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 17:58:58.072708 | orchestrator | Friday 29 August 2025 17:56:56 +0000 (0:00:00.741) 0:10:27.238 ********* 2025-08-29 17:58:58.072711 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072715 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072719 | orchestrator | ok: [testbed-node-5] 2025-08-29 
17:58:58.072722 | orchestrator | 2025-08-29 17:58:58.072726 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 17:58:58.072730 | orchestrator | Friday 29 August 2025 17:56:57 +0000 (0:00:00.733) 0:10:27.972 ********* 2025-08-29 17:58:58.072733 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072737 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072741 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072744 | orchestrator | 2025-08-29 17:58:58.072748 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 17:58:58.072752 | orchestrator | Friday 29 August 2025 17:56:57 +0000 (0:00:00.334) 0:10:28.306 ********* 2025-08-29 17:58:58.072755 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072759 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072762 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072766 | orchestrator | 2025-08-29 17:58:58.072770 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 17:58:58.072773 | orchestrator | Friday 29 August 2025 17:56:58 +0000 (0:00:00.633) 0:10:28.939 ********* 2025-08-29 17:58:58.072777 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072781 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072784 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072788 | orchestrator | 2025-08-29 17:58:58.072792 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 17:58:58.072795 | orchestrator | Friday 29 August 2025 17:56:58 +0000 (0:00:00.346) 0:10:29.286 ********* 2025-08-29 17:58:58.072799 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072803 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072806 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072810 | 
orchestrator | 2025-08-29 17:58:58.072814 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 17:58:58.072817 | orchestrator | Friday 29 August 2025 17:56:59 +0000 (0:00:00.735) 0:10:30.022 ********* 2025-08-29 17:58:58.072821 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072827 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072830 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072834 | orchestrator | 2025-08-29 17:58:58.072838 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 17:58:58.072842 | orchestrator | Friday 29 August 2025 17:57:00 +0000 (0:00:00.766) 0:10:30.789 ********* 2025-08-29 17:58:58.072845 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072849 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072853 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072856 | orchestrator | 2025-08-29 17:58:58.072860 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 17:58:58.072864 | orchestrator | Friday 29 August 2025 17:57:00 +0000 (0:00:00.634) 0:10:31.424 ********* 2025-08-29 17:58:58.072867 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072871 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072875 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072878 | orchestrator | 2025-08-29 17:58:58.072882 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 17:58:58.072886 | orchestrator | Friday 29 August 2025 17:57:01 +0000 (0:00:00.346) 0:10:31.771 ********* 2025-08-29 17:58:58.072889 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072893 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072897 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072904 | orchestrator | 2025-08-29 
17:58:58.072910 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 17:58:58.072914 | orchestrator | Friday 29 August 2025 17:57:01 +0000 (0:00:00.383) 0:10:32.154 ********* 2025-08-29 17:58:58.072917 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072921 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072925 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072928 | orchestrator | 2025-08-29 17:58:58.072932 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 17:58:58.072936 | orchestrator | Friday 29 August 2025 17:57:02 +0000 (0:00:00.371) 0:10:32.526 ********* 2025-08-29 17:58:58.072939 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.072943 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.072946 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.072950 | orchestrator | 2025-08-29 17:58:58.072954 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 17:58:58.072957 | orchestrator | Friday 29 August 2025 17:57:02 +0000 (0:00:00.666) 0:10:33.193 ********* 2025-08-29 17:58:58.072961 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072965 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072969 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072972 | orchestrator | 2025-08-29 17:58:58.072976 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 17:58:58.072980 | orchestrator | Friday 29 August 2025 17:57:03 +0000 (0:00:00.303) 0:10:33.497 ********* 2025-08-29 17:58:58.072983 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.072987 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.072991 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.072994 | orchestrator | 2025-08-29 17:58:58.072998 | orchestrator | TASK 
[ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 17:58:58.073002 | orchestrator | Friday 29 August 2025 17:57:03 +0000 (0:00:00.326) 0:10:33.824 ********* 2025-08-29 17:58:58.073005 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.073009 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.073013 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.073016 | orchestrator | 2025-08-29 17:58:58.073020 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 17:58:58.073024 | orchestrator | Friday 29 August 2025 17:57:03 +0000 (0:00:00.297) 0:10:34.121 ********* 2025-08-29 17:58:58.073027 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.073031 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.073035 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.073038 | orchestrator | 2025-08-29 17:58:58.073042 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 17:58:58.073046 | orchestrator | Friday 29 August 2025 17:57:04 +0000 (0:00:00.665) 0:10:34.787 ********* 2025-08-29 17:58:58.073049 | orchestrator | ok: [testbed-node-3] 2025-08-29 17:58:58.073053 | orchestrator | ok: [testbed-node-4] 2025-08-29 17:58:58.073056 | orchestrator | ok: [testbed-node-5] 2025-08-29 17:58:58.073060 | orchestrator | 2025-08-29 17:58:58.073064 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-08-29 17:58:58.073068 | orchestrator | Friday 29 August 2025 17:57:04 +0000 (0:00:00.614) 0:10:35.401 ********* 2025-08-29 17:58:58.073071 | orchestrator | skipping: [testbed-node-4] 2025-08-29 17:58:58.073075 | orchestrator | skipping: [testbed-node-5] 2025-08-29 17:58:58.073079 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-08-29 17:58:58.073082 | orchestrator | 2025-08-29 17:58:58.073086 | 
orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-08-29 17:58:58.073090 | orchestrator | Friday 29 August 2025 17:57:05 +0000 (0:00:00.829) 0:10:36.231 ********* 2025-08-29 17:58:58.073093 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 17:58:58.073097 | orchestrator | 2025-08-29 17:58:58.073101 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-08-29 17:58:58.073104 | orchestrator | Friday 29 August 2025 17:57:07 +0000 (0:00:02.035) 0:10:38.266 ********* 2025-08-29 17:58:58.073113 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-08-29 17:58:58.073118 | orchestrator | skipping: [testbed-node-3] 2025-08-29 17:58:58.073122 | orchestrator | 2025-08-29 17:58:58.073125 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-08-29 17:58:58.073129 | orchestrator | Friday 29 August 2025 17:57:08 +0000 (0:00:00.243) 0:10:38.510 ********* 2025-08-29 17:58:58.073136 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:58:58.073141 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 17:58:58.073145 | orchestrator | 2025-08-29 17:58:58.073149 | orchestrator | TASK [ceph-mds : Create ceph filesystem] 
*************************************** 2025-08-29 17:58:58.073153 | orchestrator | Friday 29 August 2025 17:57:15 +0000 (0:00:07.073) 0:10:45.583 ********* 2025-08-29 17:58:58.073156 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 17:58:58.073160 | orchestrator | 2025-08-29 17:58:58.073164 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-08-29 17:58:58.073167 | orchestrator | Friday 29 August 2025 17:57:18 +0000 (0:00:03.667) 0:10:49.251 ********* 2025-08-29 17:58:58.073171 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 17:58:58.073175 | orchestrator | 2025-08-29 17:58:58.073180 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-08-29 17:58:58.073184 | orchestrator | Friday 29 August 2025 17:57:19 +0000 (0:00:00.578) 0:10:49.829 ********* 2025-08-29 17:58:58.073188 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 17:58:58.073191 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 17:58:58.073195 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-08-29 17:58:58.073199 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 17:58:58.073202 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-08-29 17:58:58.073206 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-08-29 17:58:58.073209 | orchestrator | 2025-08-29 17:58:58.073213 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-08-29 17:58:58.073217 | orchestrator | Friday 29 August 2025 17:57:20 +0000 (0:00:01.435) 0:10:51.265 ********* 2025-08-29 17:58:58.073220 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 17:58:58.073224 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 17:58:58.073228 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 17:58:58.073231 | orchestrator | 2025-08-29 17:58:58.073235 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-08-29 17:58:58.073239 | orchestrator | Friday 29 August 2025 17:57:22 +0000 (0:00:02.076) 0:10:53.341 ********* 2025-08-29 17:58:58.073242 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 17:58:58.073246 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 17:58:58.073249 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.073253 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 17:58:58.073257 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 17:58:58.073264 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.073280 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 17:58:58.073284 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 17:58:58.073287 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.073291 | orchestrator | 2025-08-29 17:58:58.073294 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-08-29 17:58:58.073298 | orchestrator | Friday 29 August 2025 17:57:24 +0000 (0:00:01.240) 0:10:54.582 ********* 2025-08-29 17:58:58.073302 | orchestrator | changed: [testbed-node-3] 2025-08-29 17:58:58.073306 | orchestrator | changed: [testbed-node-4] 2025-08-29 17:58:58.073309 | orchestrator | changed: [testbed-node-5] 2025-08-29 17:58:58.073313 | orchestrator | 2025-08-29 17:58:58.073317 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-08-29 17:58:58.073320 | orchestrator | Friday 29 August 2025 17:57:26 +0000 
(0:00:02.635) 0:10:57.218 *********
2025-08-29 17:58:58.073324 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073328 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073331 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073335 | orchestrator |
2025-08-29 17:58:58.073338 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-08-29 17:58:58.073342 | orchestrator | Friday 29 August 2025 17:57:27 +0000 (0:00:00.360) 0:10:57.578 *********
2025-08-29 17:58:58.073346 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.073350 | orchestrator |
2025-08-29 17:58:58.073353 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-08-29 17:58:58.073357 | orchestrator | Friday 29 August 2025 17:57:28 +0000 (0:00:00.964) 0:10:58.542 *********
2025-08-29 17:58:58.073360 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.073364 | orchestrator |
2025-08-29 17:58:58.073368 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-08-29 17:58:58.073371 | orchestrator | Friday 29 August 2025 17:57:28 +0000 (0:00:00.535) 0:10:59.078 *********
2025-08-29 17:58:58.073375 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.073379 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.073382 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.073386 | orchestrator |
2025-08-29 17:58:58.073390 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-08-29 17:58:58.073393 | orchestrator | Friday 29 August 2025 17:57:30 +0000 (0:00:01.627) 0:11:00.705 *********
2025-08-29 17:58:58.073397 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.073401 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.073406 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.073410 | orchestrator |
2025-08-29 17:58:58.073414 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-08-29 17:58:58.073417 | orchestrator | Friday 29 August 2025 17:57:31 +0000 (0:00:01.214) 0:11:01.920 *********
2025-08-29 17:58:58.073421 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.073425 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.073428 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.073432 | orchestrator |
2025-08-29 17:58:58.073435 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-08-29 17:58:58.073439 | orchestrator | Friday 29 August 2025 17:57:33 +0000 (0:00:01.798) 0:11:03.719 *********
2025-08-29 17:58:58.073443 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.073446 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.073450 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.073454 | orchestrator |
2025-08-29 17:58:58.073457 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-08-29 17:58:58.073461 | orchestrator | Friday 29 August 2025 17:57:35 +0000 (0:00:01.951) 0:11:05.671 *********
2025-08-29 17:58:58.073465 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073471 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073475 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073479 | orchestrator |
2025-08-29 17:58:58.073485 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:58:58.073489 | orchestrator | Friday 29 August 2025 17:57:36 +0000 (0:00:01.739) 0:11:07.410 *********
2025-08-29 17:58:58.073492 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.073496 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.073500 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.073503 | orchestrator |
2025-08-29 17:58:58.073507 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-08-29 17:58:58.073511 | orchestrator | Friday 29 August 2025 17:57:37 +0000 (0:00:00.722) 0:11:08.133 *********
2025-08-29 17:58:58.073514 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.073518 | orchestrator |
2025-08-29 17:58:58.073522 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-08-29 17:58:58.073525 | orchestrator | Friday 29 August 2025 17:57:39 +0000 (0:00:01.377) 0:11:09.511 *********
2025-08-29 17:58:58.073529 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073533 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073536 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073540 | orchestrator |
2025-08-29 17:58:58.073544 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-08-29 17:58:58.073547 | orchestrator | Friday 29 August 2025 17:57:39 +0000 (0:00:00.491) 0:11:10.002 *********
2025-08-29 17:58:58.073551 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.073555 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.073558 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.073562 | orchestrator |
2025-08-29 17:58:58.073565 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-08-29 17:58:58.073569 | orchestrator | Friday 29 August 2025 17:57:41 +0000 (0:00:01.530) 0:11:11.532 *********
2025-08-29 17:58:58.073573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.073576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.073580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.073584 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073587 | orchestrator |
2025-08-29 17:58:58.073591 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-08-29 17:58:58.073595 | orchestrator | Friday 29 August 2025 17:57:42 +0000 (0:00:01.267) 0:11:12.799 *********
2025-08-29 17:58:58.073601 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073607 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073613 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073619 | orchestrator |
2025-08-29 17:58:58.073624 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-08-29 17:58:58.073630 | orchestrator |
2025-08-29 17:58:58.073637 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 17:58:58.073642 | orchestrator | Friday 29 August 2025 17:57:43 +0000 (0:00:00.732) 0:11:13.532 *********
2025-08-29 17:58:58.073648 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.073654 | orchestrator |
2025-08-29 17:58:58.073661 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 17:58:58.073666 | orchestrator | Friday 29 August 2025 17:57:43 +0000 (0:00:00.874) 0:11:14.407 *********
2025-08-29 17:58:58.073672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.073678 | orchestrator |
2025-08-29 17:58:58.073684 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 17:58:58.073690 | orchestrator | Friday 29 August 2025 17:57:44 +0000 (0:00:00.623) 0:11:15.030 *********
2025-08-29 17:58:58.073701 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073708 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073714 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073720 | orchestrator |
2025-08-29 17:58:58.073726 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 17:58:58.073729 | orchestrator | Friday 29 August 2025 17:57:44 +0000 (0:00:00.340) 0:11:15.371 *********
2025-08-29 17:58:58.073733 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073737 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073740 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073744 | orchestrator |
2025-08-29 17:58:58.073748 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 17:58:58.073751 | orchestrator | Friday 29 August 2025 17:57:45 +0000 (0:00:01.061) 0:11:16.432 *********
2025-08-29 17:58:58.073755 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073759 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073762 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073766 | orchestrator |
2025-08-29 17:58:58.073773 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 17:58:58.073777 | orchestrator | Friday 29 August 2025 17:57:46 +0000 (0:00:00.803) 0:11:17.236 *********
2025-08-29 17:58:58.073780 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073784 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073787 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073791 | orchestrator |
2025-08-29 17:58:58.073795 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 17:58:58.073798 | orchestrator | Friday 29 August 2025 17:57:47 +0000 (0:00:00.708) 0:11:17.945 *********
2025-08-29 17:58:58.073802 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073806 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073809 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073813 | orchestrator |
2025-08-29 17:58:58.073817 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 17:58:58.073821 | orchestrator | Friday 29 August 2025 17:57:47 +0000 (0:00:00.359) 0:11:18.305 *********
2025-08-29 17:58:58.073824 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073828 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073832 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073835 | orchestrator |
2025-08-29 17:58:58.073839 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 17:58:58.073846 | orchestrator | Friday 29 August 2025 17:57:48 +0000 (0:00:00.649) 0:11:18.954 *********
2025-08-29 17:58:58.073849 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073853 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073857 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073861 | orchestrator |
2025-08-29 17:58:58.073864 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 17:58:58.073868 | orchestrator | Friday 29 August 2025 17:57:48 +0000 (0:00:00.375) 0:11:19.329 *********
2025-08-29 17:58:58.073872 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073875 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073879 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073883 | orchestrator |
2025-08-29 17:58:58.073886 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 17:58:58.073890 | orchestrator | Friday 29 August 2025 17:57:49 +0000 (0:00:00.726) 0:11:20.056 *********
2025-08-29 17:58:58.073894 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073897 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073901 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073905 | orchestrator |
2025-08-29 17:58:58.073908 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 17:58:58.073912 | orchestrator | Friday 29 August 2025 17:57:50 +0000 (0:00:00.812) 0:11:20.868 *********
2025-08-29 17:58:58.073916 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073919 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073927 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073931 | orchestrator |
2025-08-29 17:58:58.073934 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 17:58:58.073938 | orchestrator | Friday 29 August 2025 17:57:51 +0000 (0:00:00.625) 0:11:21.494 *********
2025-08-29 17:58:58.073942 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.073945 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.073949 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.073953 | orchestrator |
2025-08-29 17:58:58.073956 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 17:58:58.073960 | orchestrator | Friday 29 August 2025 17:57:51 +0000 (0:00:00.345) 0:11:21.839 *********
2025-08-29 17:58:58.073964 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073967 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073971 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073975 | orchestrator |
2025-08-29 17:58:58.073978 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 17:58:58.073982 | orchestrator | Friday 29 August 2025 17:57:51 +0000 (0:00:00.424) 0:11:22.263 *********
2025-08-29 17:58:58.073986 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.073989 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.073993 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.073997 | orchestrator |
2025-08-29 17:58:58.074000 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 17:58:58.074004 | orchestrator | Friday 29 August 2025 17:57:52 +0000 (0:00:00.459) 0:11:22.722 *********
2025-08-29 17:58:58.074008 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.074012 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.074031 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.074035 | orchestrator |
2025-08-29 17:58:58.074039 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 17:58:58.074042 | orchestrator | Friday 29 August 2025 17:57:53 +0000 (0:00:00.827) 0:11:23.550 *********
2025-08-29 17:58:58.074046 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074050 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074053 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074057 | orchestrator |
2025-08-29 17:58:58.074061 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 17:58:58.074064 | orchestrator | Friday 29 August 2025 17:57:53 +0000 (0:00:00.472) 0:11:24.023 *********
2025-08-29 17:58:58.074068 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074072 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074075 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074079 | orchestrator |
2025-08-29 17:58:58.074083 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 17:58:58.074086 | orchestrator | Friday 29 August 2025 17:57:53 +0000 (0:00:00.417) 0:11:24.440 *********
2025-08-29 17:58:58.074090 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074094 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074097 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074101 | orchestrator |
2025-08-29 17:58:58.074105 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 17:58:58.074108 | orchestrator | Friday 29 August 2025 17:57:54 +0000 (0:00:00.409) 0:11:24.850 *********
2025-08-29 17:58:58.074112 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.074116 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.074119 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.074123 | orchestrator |
2025-08-29 17:58:58.074127 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 17:58:58.074131 | orchestrator | Friday 29 August 2025 17:57:55 +0000 (0:00:00.741) 0:11:25.591 *********
2025-08-29 17:58:58.074134 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.074138 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.074142 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.074146 | orchestrator |
2025-08-29 17:58:58.074153 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-08-29 17:58:58.074157 | orchestrator | Friday 29 August 2025 17:57:55 +0000 (0:00:00.553) 0:11:26.145 *********
2025-08-29 17:58:58.074161 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.074165 | orchestrator |
2025-08-29 17:58:58.074168 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-08-29 17:58:58.074172 | orchestrator | Friday 29 August 2025 17:57:56 +0000 (0:00:00.843) 0:11:26.989 *********
2025-08-29 17:58:58.074176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074179 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.074183 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 17:58:58.074187 | orchestrator |
2025-08-29 17:58:58.074190 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-08-29 17:58:58.074196 | orchestrator | Friday 29 August 2025 17:57:58 +0000 (0:00:02.139) 0:11:29.128 *********
2025-08-29 17:58:58.074200 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.074204 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.074207 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.074211 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 17:58:58.074215 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-08-29 17:58:58.074218 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.074222 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 17:58:58.074226 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-08-29 17:58:58.074229 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.074233 | orchestrator |
2025-08-29 17:58:58.074237 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-08-29 17:58:58.074240 | orchestrator | Friday 29 August 2025 17:57:59 +0000 (0:00:01.304) 0:11:30.432 *********
2025-08-29 17:58:58.074244 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074248 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074251 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074255 | orchestrator |
2025-08-29 17:58:58.074259 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-08-29 17:58:58.074262 | orchestrator | Friday 29 August 2025 17:58:00 +0000 (0:00:00.377) 0:11:30.810 *********
2025-08-29 17:58:58.074292 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.074297 | orchestrator |
2025-08-29 17:58:58.074301 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-08-29 17:58:58.074304 | orchestrator | Friday 29 August 2025 17:58:01 +0000 (0:00:00.812) 0:11:31.622 *********
2025-08-29 17:58:58.074308 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.074312 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.074316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.074320 | orchestrator |
2025-08-29 17:58:58.074324 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-08-29 17:58:58.074327 | orchestrator | Friday 29 August 2025 17:58:01 +0000 (0:00:00.804) 0:11:32.427 *********
2025-08-29 17:58:58.074331 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074335 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-08-29 17:58:58.074384 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074399 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-08-29 17:58:58.074403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074407 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-08-29 17:58:58.074411 | orchestrator |
2025-08-29 17:58:58.074414 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-08-29 17:58:58.074418 | orchestrator | Friday 29 August 2025 17:58:06 +0000 (0:00:04.452) 0:11:36.879 *********
2025-08-29 17:58:58.074422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074426 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 17:58:58.074429 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074433 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 17:58:58.074437 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 17:58:58.074440 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-08-29 17:58:58.074444 | orchestrator |
2025-08-29 17:58:58.074449 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-08-29 17:58:58.074453 | orchestrator | Friday 29 August 2025 17:58:08 +0000 (0:00:02.214) 0:11:39.094 *********
2025-08-29 17:58:58.074457 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 17:58:58.074461 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.074464 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 17:58:58.074468 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.074472 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 17:58:58.074475 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.074479 | orchestrator |
2025-08-29 17:58:58.074483 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-08-29 17:58:58.074486 | orchestrator | Friday 29 August 2025 17:58:10 +0000 (0:00:01.506) 0:11:40.600 *********
2025-08-29 17:58:58.074490 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-08-29 17:58:58.074494 | orchestrator |
2025-08-29 17:58:58.074497 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-08-29 17:58:58.074501 | orchestrator | Friday 29 August 2025 17:58:10 +0000 (0:00:00.275) 0:11:40.875 *********
2025-08-29 17:58:58.074508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074527 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074531 | orchestrator |
2025-08-29 17:58:58.074535 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-08-29 17:58:58.074538 | orchestrator | Friday 29 August 2025 17:58:11 +0000 (0:00:00.654) 0:11:41.530 *********
2025-08-29 17:58:58.074542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074564 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074567 | orchestrator |
2025-08-29 17:58:58.074571 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-08-29 17:58:58.074575 | orchestrator | Friday 29 August 2025 17:58:11 +0000 (0:00:00.620) 0:11:42.151 *********
2025-08-29 17:58:58.074578 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074586 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074590 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074594 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-08-29 17:58:58.074597 | orchestrator |
2025-08-29 17:58:58.074601 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-08-29 17:58:58.074605 | orchestrator | Friday 29 August 2025 17:58:40 +0000 (0:00:29.003) 0:12:11.155 *********
2025-08-29 17:58:58.074608 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074612 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074616 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074619 | orchestrator |
2025-08-29 17:58:58.074623 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-08-29 17:58:58.074627 | orchestrator | Friday 29 August 2025 17:58:41 +0000 (0:00:00.336) 0:12:11.492 *********
2025-08-29 17:58:58.074631 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074634 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074638 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074642 | orchestrator |
2025-08-29 17:58:58.074645 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-08-29 17:58:58.074651 | orchestrator | Friday 29 August 2025 17:58:41 +0000 (0:00:00.349) 0:12:11.841 *********
2025-08-29 17:58:58.074655 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.074658 | orchestrator |
2025-08-29 17:58:58.074662 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-08-29 17:58:58.074666 | orchestrator | Friday 29 August 2025 17:58:42 +0000 (0:00:00.848) 0:12:12.689 *********
2025-08-29 17:58:58.074669 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.074673 | orchestrator |
2025-08-29 17:58:58.074677 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-08-29 17:58:58.074680 | orchestrator | Friday 29 August 2025 17:58:42 +0000 (0:00:00.556) 0:12:13.246 *********
2025-08-29 17:58:58.074684 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.074688 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.074691 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.074695 | orchestrator |
2025-08-29 17:58:58.074702 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-08-29 17:58:58.074706 | orchestrator | Friday 29 August 2025 17:58:44 +0000 (0:00:01.797) 0:12:15.044 *********
2025-08-29 17:58:58.074712 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.074715 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.074719 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.074723 | orchestrator |
2025-08-29 17:58:58.074726 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-08-29 17:58:58.074730 | orchestrator | Friday 29 August 2025 17:58:46 +0000 (0:00:01.448) 0:12:16.493 *********
2025-08-29 17:58:58.074734 | orchestrator | changed: [testbed-node-3]
2025-08-29 17:58:58.074738 | orchestrator | changed: [testbed-node-4]
2025-08-29 17:58:58.074741 | orchestrator | changed: [testbed-node-5]
2025-08-29 17:58:58.074745 | orchestrator |
2025-08-29 17:58:58.074749 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-08-29 17:58:58.074752 | orchestrator | Friday 29 August 2025 17:58:48 +0000 (0:00:02.021) 0:12:18.515 *********
2025-08-29 17:58:58.074756 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.074762 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.074766 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 17:58:58.074770 | orchestrator |
2025-08-29 17:58:58.074774 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 17:58:58.074777 | orchestrator | Friday 29 August 2025 17:58:51 +0000 (0:00:03.305) 0:12:21.821 *********
2025-08-29 17:58:58.074781 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074785 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074788 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074792 | orchestrator |
2025-08-29 17:58:58.074796 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-08-29 17:58:58.074800 | orchestrator | Friday 29 August 2025 17:58:51 +0000 (0:00:00.501) 0:12:22.322 *********
2025-08-29 17:58:58.074803 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 17:58:58.074807 | orchestrator |
2025-08-29 17:58:58.074811 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-08-29 17:58:58.074814 | orchestrator | Friday 29 August 2025 17:58:52 +0000 (0:00:00.912) 0:12:23.235 *********
2025-08-29 17:58:58.074818 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.074822 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.074826 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.074829 | orchestrator |
2025-08-29 17:58:58.074833 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-08-29 17:58:58.074837 | orchestrator | Friday 29 August 2025 17:58:53 +0000 (0:00:00.372) 0:12:23.608 *********
2025-08-29 17:58:58.074840 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074844 | orchestrator | skipping: [testbed-node-4]
2025-08-29 17:58:58.074848 | orchestrator | skipping: [testbed-node-5]
2025-08-29 17:58:58.074852 | orchestrator |
2025-08-29 17:58:58.074855 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-08-29 17:58:58.074859 | orchestrator | Friday 29 August 2025 17:58:53 +0000 (0:00:00.383) 0:12:23.991 *********
2025-08-29 17:58:58.074863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 17:58:58.074867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 17:58:58.074870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 17:58:58.074874 | orchestrator | skipping: [testbed-node-3]
2025-08-29 17:58:58.074878 | orchestrator |
2025-08-29 17:58:58.074882 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-08-29 17:58:58.074888 | orchestrator | Friday 29 August 2025 17:58:54 +0000 (0:00:01.074) 0:12:25.066 *********
2025-08-29 17:58:58.074892 | orchestrator | ok: [testbed-node-3]
2025-08-29 17:58:58.074896 | orchestrator | ok: [testbed-node-4]
2025-08-29 17:58:58.074899 | orchestrator | ok: [testbed-node-5]
2025-08-29 17:58:58.074903 | orchestrator |
2025-08-29 17:58:58.074907 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 17:58:58.074910 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-08-29 17:58:58.074914 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-08-29 17:58:58.074920 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-08-29 17:58:58.074924 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-08-29 17:58:58.074928 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-08-29 17:58:58.074932 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-08-29 17:58:58.074935 | orchestrator |
2025-08-29 17:58:58.074939 | orchestrator |
2025-08-29 17:58:58.074943 | orchestrator |
2025-08-29 17:58:58.074947 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 17:58:58.074950 | orchestrator | Friday 29 August 2025 17:58:54 +0000 (0:00:00.294) 0:12:25.361 *********
2025-08-29 17:58:58.074954 | orchestrator | ===============================================================================
2025-08-29 17:58:58.074958 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 79.35s
2025-08-29 17:58:58.074964 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.48s
2025-08-29 17:58:58.074968 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.00s
2025-08-29 17:58:58.074971 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.09s
2025-08-29 17:58:58.074975 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.21s
2025-08-29 17:58:58.074979 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.03s
2025-08-29 17:58:58.074983 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.52s
2025-08-29 17:58:58.074986 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.90s
2025-08-29 17:58:58.074990 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.73s
2025-08-29 17:58:58.074994 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.07s
2025-08-29 17:58:58.074997 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.06s
2025-08-29 17:58:58.075001 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.53s
2025-08-29 17:58:58.075005 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.08s
2025-08-29 17:58:58.075008 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 4.61s
2025-08-29 17:58:58.075012 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.45s
2025-08-29 17:58:58.075016 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.25s
2025-08-29 17:58:58.075020 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.18s
2025-08-29 17:58:58.075023 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.01s
2025-08-29 17:58:58.075027 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.71s
2025-08-29 17:58:58.075030 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.67s
2025-08-29 17:58:58.075037 | orchestrator |
2025-08-29 17:58:58 | INFO  | Task
1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:58:58.075041 | orchestrator | 2025-08-29 17:58:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:01.098417 | orchestrator | 2025-08-29 17:59:01 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:01.102669 | orchestrator | 2025-08-29 17:59:01 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:01.104171 | orchestrator | 2025-08-29 17:59:01 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:01.104507 | orchestrator | 2025-08-29 17:59:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:04.142852 | orchestrator | 2025-08-29 17:59:04 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:04.143199 | orchestrator | 2025-08-29 17:59:04 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:04.144672 | orchestrator | 2025-08-29 17:59:04 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:04.145028 | orchestrator | 2025-08-29 17:59:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:07.197380 | orchestrator | 2025-08-29 17:59:07 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:07.198984 | orchestrator | 2025-08-29 17:59:07 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:07.200549 | orchestrator | 2025-08-29 17:59:07 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:07.200573 | orchestrator | 2025-08-29 17:59:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:10.250407 | orchestrator | 2025-08-29 17:59:10 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:10.251935 | orchestrator | 2025-08-29 17:59:10 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state 
STARTED 2025-08-29 17:59:10.254518 | orchestrator | 2025-08-29 17:59:10 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:10.254613 | orchestrator | 2025-08-29 17:59:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:13.306415 | orchestrator | 2025-08-29 17:59:13 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:13.307396 | orchestrator | 2025-08-29 17:59:13 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:13.308964 | orchestrator | 2025-08-29 17:59:13 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:13.308991 | orchestrator | 2025-08-29 17:59:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:16.354766 | orchestrator | 2025-08-29 17:59:16 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:16.355227 | orchestrator | 2025-08-29 17:59:16 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:16.356622 | orchestrator | 2025-08-29 17:59:16 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:16.356644 | orchestrator | 2025-08-29 17:59:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:19.398204 | orchestrator | 2025-08-29 17:59:19 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:19.399146 | orchestrator | 2025-08-29 17:59:19 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:19.401408 | orchestrator | 2025-08-29 17:59:19 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:19.401454 | orchestrator | 2025-08-29 17:59:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:22.445191 | orchestrator | 2025-08-29 17:59:22 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:22.445317 | orchestrator | 
2025-08-29 17:59:22 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:22.445762 | orchestrator | 2025-08-29 17:59:22 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:22.445948 | orchestrator | 2025-08-29 17:59:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:25.501010 | orchestrator | 2025-08-29 17:59:25 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:25.502440 | orchestrator | 2025-08-29 17:59:25 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:25.504719 | orchestrator | 2025-08-29 17:59:25 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:25.504945 | orchestrator | 2025-08-29 17:59:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:28.555033 | orchestrator | 2025-08-29 17:59:28 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:28.556306 | orchestrator | 2025-08-29 17:59:28 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:28.556946 | orchestrator | 2025-08-29 17:59:28 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:28.556969 | orchestrator | 2025-08-29 17:59:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:31.618316 | orchestrator | 2025-08-29 17:59:31 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:31.620539 | orchestrator | 2025-08-29 17:59:31 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:31.622644 | orchestrator | 2025-08-29 17:59:31 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:31.622692 | orchestrator | 2025-08-29 17:59:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:34.664532 | orchestrator | 2025-08-29 17:59:34 | INFO  | Task 
d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:34.665387 | orchestrator | 2025-08-29 17:59:34 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:34.666422 | orchestrator | 2025-08-29 17:59:34 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:34.666749 | orchestrator | 2025-08-29 17:59:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:37.709006 | orchestrator | 2025-08-29 17:59:37 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:37.709449 | orchestrator | 2025-08-29 17:59:37 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:37.711698 | orchestrator | 2025-08-29 17:59:37 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:37.711889 | orchestrator | 2025-08-29 17:59:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:40.751545 | orchestrator | 2025-08-29 17:59:40 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:40.752360 | orchestrator | 2025-08-29 17:59:40 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:40.752996 | orchestrator | 2025-08-29 17:59:40 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:40.753055 | orchestrator | 2025-08-29 17:59:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:43.804028 | orchestrator | 2025-08-29 17:59:43 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:43.806627 | orchestrator | 2025-08-29 17:59:43 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:43.809402 | orchestrator | 2025-08-29 17:59:43 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:43.809452 | orchestrator | 2025-08-29 17:59:43 | INFO  | Wait 1 second(s) until the next 
check 2025-08-29 17:59:46.860678 | orchestrator | 2025-08-29 17:59:46 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:46.862957 | orchestrator | 2025-08-29 17:59:46 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:46.865534 | orchestrator | 2025-08-29 17:59:46 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:46.865570 | orchestrator | 2025-08-29 17:59:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:49.910524 | orchestrator | 2025-08-29 17:59:49 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:49.912180 | orchestrator | 2025-08-29 17:59:49 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:49.913869 | orchestrator | 2025-08-29 17:59:49 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:49.913880 | orchestrator | 2025-08-29 17:59:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:52.959158 | orchestrator | 2025-08-29 17:59:52 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:52.961877 | orchestrator | 2025-08-29 17:59:52 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:52.963474 | orchestrator | 2025-08-29 17:59:52 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:52.963550 | orchestrator | 2025-08-29 17:59:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:56.016798 | orchestrator | 2025-08-29 17:59:56 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:56.017864 | orchestrator | 2025-08-29 17:59:56 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:56.019970 | orchestrator | 2025-08-29 17:59:56 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 
17:59:56.020032 | orchestrator | 2025-08-29 17:59:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 17:59:59.062774 | orchestrator | 2025-08-29 17:59:59 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 17:59:59.063993 | orchestrator | 2025-08-29 17:59:59 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 17:59:59.066200 | orchestrator | 2025-08-29 17:59:59 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 17:59:59.066250 | orchestrator | 2025-08-29 17:59:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:02.120018 | orchestrator | 2025-08-29 18:00:02 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:02.122518 | orchestrator | 2025-08-29 18:00:02 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 18:00:02.126846 | orchestrator | 2025-08-29 18:00:02 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 18:00:02.126900 | orchestrator | 2025-08-29 18:00:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:05.173781 | orchestrator | 2025-08-29 18:00:05 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:05.175459 | orchestrator | 2025-08-29 18:00:05 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 18:00:05.177121 | orchestrator | 2025-08-29 18:00:05 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 18:00:05.177195 | orchestrator | 2025-08-29 18:00:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:08.230218 | orchestrator | 2025-08-29 18:00:08 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:08.231449 | orchestrator | 2025-08-29 18:00:08 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 18:00:08.235053 | orchestrator | 2025-08-29 18:00:08 | 
INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 18:00:08.235092 | orchestrator | 2025-08-29 18:00:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:11.271342 | orchestrator | 2025-08-29 18:00:11 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:11.272463 | orchestrator | 2025-08-29 18:00:11 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 18:00:11.273513 | orchestrator | 2025-08-29 18:00:11 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 18:00:11.273541 | orchestrator | 2025-08-29 18:00:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:14.337594 | orchestrator | 2025-08-29 18:00:14 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:14.339025 | orchestrator | 2025-08-29 18:00:14 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 18:00:14.340408 | orchestrator | 2025-08-29 18:00:14 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state STARTED 2025-08-29 18:00:14.340444 | orchestrator | 2025-08-29 18:00:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:17.392825 | orchestrator | 2025-08-29 18:00:17 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:17.395187 | orchestrator | 2025-08-29 18:00:17 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED 2025-08-29 18:00:17.400538 | orchestrator | 2025-08-29 18:00:17.400593 | orchestrator | 2025-08-29 18:00:17.400613 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:00:17.400635 | orchestrator | 2025-08-29 18:00:17.400655 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:00:17.400676 | orchestrator | Friday 29 August 2025 17:57:05 +0000 (0:00:00.348) 0:00:00.348 ********* 
2025-08-29 18:00:17.400695 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:00:17.400717 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:00:17.400735 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:00:17.400755 | orchestrator |
2025-08-29 18:00:17.400774 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:00:17.400795 | orchestrator | Friday 29 August 2025 17:57:05 +0000 (0:00:00.312) 0:00:00.661 *********
2025-08-29 18:00:17.400807 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-08-29 18:00:17.400819 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-08-29 18:00:17.400829 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-08-29 18:00:17.400840 | orchestrator |
2025-08-29 18:00:17.400852 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-08-29 18:00:17.400862 | orchestrator |
2025-08-29 18:00:17.400873 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 18:00:17.400909 | orchestrator | Friday 29 August 2025 17:57:06 +0000 (0:00:00.443) 0:00:01.104 *********
2025-08-29 18:00:17.400921 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:00:17.400932 | orchestrator |
2025-08-29 18:00:17.400943 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-08-29 18:00:17.400954 | orchestrator | Friday 29 August 2025 17:57:06 +0000 (0:00:00.531) 0:00:01.636 *********
2025-08-29 18:00:17.400964 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 18:00:17.400975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 18:00:17.400986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 18:00:17.400996 | orchestrator |
2025-08-29 18:00:17.401007 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-08-29 18:00:17.401018 | orchestrator | Friday 29 August 2025 17:57:08 +0000 (0:00:01.724) 0:00:03.360 *********
2025-08-29 18:00:17.401047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401180 | orchestrator |
2025-08-29 18:00:17.401198 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 18:00:17.401216 | orchestrator | Friday 29 August 2025 17:57:10 +0000 (0:00:01.937) 0:00:05.298 *********
2025-08-29 18:00:17.401234 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:00:17.401251 | orchestrator |
2025-08-29 18:00:17.401320 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-08-29 18:00:17.401337 | orchestrator | Friday 29 August 2025 17:57:10 +0000 (0:00:00.534) 0:00:05.833 *********
2025-08-29 18:00:17.401370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401527 | orchestrator |
2025-08-29 18:00:17.401538 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-08-29 18:00:17.401549 | orchestrator | Friday 29 August 2025 17:57:13 +0000 (0:00:02.561) 0:00:08.395 *********
2025-08-29 18:00:17.401566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-08-29 18:00:17.401617 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:00:17.401629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-08-29 18:00:17.401641 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:00:17.401657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 18:00:17.401669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 18:00:17.401681 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:17.401691 | orchestrator | 2025-08-29 18:00:17.401702 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 18:00:17.401713 | orchestrator | Friday 29 August 2025 17:57:14 +0000 (0:00:01.135) 0:00:09.530 ********* 2025-08-29 18:00:17.401733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 18:00:17.401751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 18:00:17.401763 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:17.401779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 18:00:17.401791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 18:00:17.401803 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:17.401820 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 18:00:17.401838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 18:00:17.401850 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:17.401861 | orchestrator | 2025-08-29 18:00:17.401871 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 18:00:17.401882 | orchestrator | Friday 29 August 2025 17:57:15 +0000 (0:00:00.970) 0:00:10.501 ********* 2025-08-29 18:00:17.401898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 18:00:17.401910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 18:00:17.401921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 18:00:17.401955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 18:00:17.401968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 18:00:17.401985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 18:00:17.401996 | orchestrator | 2025-08-29 18:00:17.402007 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 18:00:17.402085 | orchestrator | Friday 29 August 2025 17:57:17 +0000 (0:00:02.406) 0:00:12.908 ********* 2025-08-29 18:00:17.402100 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:00:17.402111 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:17.402122 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:00:17.402132 | orchestrator | 2025-08-29 18:00:17.402143 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 18:00:17.402154 | orchestrator | Friday 29 August 2025 17:57:20 +0000 (0:00:03.102) 0:00:16.010 ********* 2025-08-29 18:00:17.402165 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:17.402175 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:00:17.402186 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:00:17.402197 | orchestrator | 2025-08-29 18:00:17.402208 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 18:00:17.402218 | orchestrator | Friday 29 August 2025 17:57:23 +0000 (0:00:02.256) 0:00:18.266 ********* 2025-08-29 18:00:17.402239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 18:00:17.402252 | orchestrator | 2025-08-29 18:00:17 | INFO  | Task 1f18ced2-948a-4c77-9b07-845c00462edf is in state SUCCESS 2025-08-29 18:00:17.402296 | orchestrator | 2025-08-29 18:00:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:17.402321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 18:00:17.402349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 18:00:17.402365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 18:00:17.402398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 18:00:17.402411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 18:00:17.402423 | orchestrator | 2025-08-29 18:00:17.402434 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2025-08-29 18:00:17.402452 | orchestrator | Friday 29 August 2025 17:57:25 +0000 (0:00:02.135) 0:00:20.402 ********* 2025-08-29 18:00:17.402471 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:17.402490 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:17.402507 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:17.402518 | orchestrator | 2025-08-29 18:00:17.402528 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 18:00:17.402539 | orchestrator | Friday 29 August 2025 17:57:25 +0000 (0:00:00.340) 0:00:20.743 ********* 2025-08-29 18:00:17.402550 | orchestrator | 2025-08-29 18:00:17.402560 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 18:00:17.402571 | orchestrator | Friday 29 August 2025 17:57:25 +0000 (0:00:00.067) 0:00:20.810 ********* 2025-08-29 18:00:17.402582 | orchestrator | 2025-08-29 18:00:17.402617 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-08-29 18:00:17.402628 | orchestrator | Friday 29 August 2025 17:57:25 +0000 (0:00:00.070) 0:00:20.880 ********* 2025-08-29 18:00:17.402639 | orchestrator | 2025-08-29 18:00:17.402649 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-08-29 18:00:17.402660 | orchestrator | Friday 29 August 2025 17:57:26 +0000 (0:00:00.260) 0:00:21.141 ********* 2025-08-29 18:00:17.402671 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:17.402681 | orchestrator | 2025-08-29 18:00:17.402692 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-08-29 18:00:17.402703 | orchestrator | Friday 29 August 2025 17:57:26 +0000 (0:00:00.239) 0:00:21.380 ********* 2025-08-29 18:00:17.402713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:17.402723 | orchestrator | 
2025-08-29 18:00:17.402734 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-08-29 18:00:17.402745 | orchestrator | Friday 29 August 2025 17:57:26 +0000 (0:00:00.263) 0:00:21.644 ********* 2025-08-29 18:00:17.402755 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:17.402766 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:00:17.402776 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:00:17.402787 | orchestrator | 2025-08-29 18:00:17.402798 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-08-29 18:00:17.402808 | orchestrator | Friday 29 August 2025 17:58:42 +0000 (0:01:15.849) 0:01:37.494 ********* 2025-08-29 18:00:17.402819 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:17.402829 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:00:17.402840 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:00:17.402851 | orchestrator | 2025-08-29 18:00:17.402861 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 18:00:17.402872 | orchestrator | Friday 29 August 2025 18:00:04 +0000 (0:01:21.992) 0:02:59.486 ********* 2025-08-29 18:00:17.402882 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:00:17.402893 | orchestrator | 2025-08-29 18:00:17.402904 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-08-29 18:00:17.402915 | orchestrator | Friday 29 August 2025 18:00:05 +0000 (0:00:00.790) 0:03:00.276 ********* 2025-08-29 18:00:17.402925 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:17.402936 | orchestrator | 2025-08-29 18:00:17.402947 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-08-29 18:00:17.402957 | orchestrator | Friday 29 August 2025 18:00:07 +0000 
(0:00:02.362) 0:03:02.638 ********* 2025-08-29 18:00:17.402968 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:17.402979 | orchestrator | 2025-08-29 18:00:17.402989 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-08-29 18:00:17.403000 | orchestrator | Friday 29 August 2025 18:00:09 +0000 (0:00:02.173) 0:03:04.812 ********* 2025-08-29 18:00:17.403011 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:17.403022 | orchestrator | 2025-08-29 18:00:17.403039 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-08-29 18:00:17.403050 | orchestrator | Friday 29 August 2025 18:00:12 +0000 (0:00:02.678) 0:03:07.490 ********* 2025-08-29 18:00:17.403061 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:17.403071 | orchestrator | 2025-08-29 18:00:17.403082 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:00:17.403095 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 18:00:17.403106 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 18:00:17.403117 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 18:00:17.403135 | orchestrator | 2025-08-29 18:00:17.403146 | orchestrator | 2025-08-29 18:00:17.403156 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:00:17.403167 | orchestrator | Friday 29 August 2025 18:00:14 +0000 (0:00:02.269) 0:03:09.759 ********* 2025-08-29 18:00:17.403178 | orchestrator | =============================================================================== 2025-08-29 18:00:17.403188 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.99s 2025-08-29 18:00:17.403199 | 
orchestrator | opensearch : Restart opensearch container ------------------------------ 75.85s 2025-08-29 18:00:17.403209 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.10s 2025-08-29 18:00:17.403220 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.68s 2025-08-29 18:00:17.403230 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.56s 2025-08-29 18:00:17.403241 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.41s 2025-08-29 18:00:17.403251 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.36s 2025-08-29 18:00:17.403316 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.27s 2025-08-29 18:00:17.403328 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.26s 2025-08-29 18:00:17.403339 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s 2025-08-29 18:00:17.403350 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.14s 2025-08-29 18:00:17.403360 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.94s 2025-08-29 18:00:17.403371 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.72s 2025-08-29 18:00:17.403388 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.14s 2025-08-29 18:00:17.403398 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.97s 2025-08-29 18:00:17.403409 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.79s 2025-08-29 18:00:17.403420 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-08-29 18:00:17.403430 | 
orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-08-29 18:00:17.403441 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-08-29 18:00:17.403452 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.40s
2025-08-29 18:00:20.445099 | orchestrator | 2025-08-29 18:00:20 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:20.445707 | orchestrator | 2025-08-29 18:00:20 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 18:00:20.445744 | orchestrator | 2025-08-29 18:00:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:00:23.491516 | orchestrator | 2025-08-29 18:00:23 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:23.491862 | orchestrator | 2025-08-29 18:00:23 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 18:00:23.492088 | orchestrator | 2025-08-29 18:00:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:00:26.536283 | orchestrator | 2025-08-29 18:00:26 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:26.538582 | orchestrator | 2025-08-29 18:00:26 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 18:00:26.538622 | orchestrator | 2025-08-29 18:00:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:00:29.592936 | orchestrator | 2025-08-29 18:00:29 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:29.594667 | orchestrator | 2025-08-29 18:00:29 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 18:00:29.594709 | orchestrator | 2025-08-29 18:00:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:00:32.635201 | orchestrator | 2025-08-29 18:00:32 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:32.637471 | orchestrator | 2025-08-29 18:00:32 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 18:00:32.637504 | orchestrator | 2025-08-29 18:00:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:00:35.704240 | orchestrator | 2025-08-29 18:00:35 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:35.704343 | orchestrator | 2025-08-29 18:00:35 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state STARTED
2025-08-29 18:00:35.704357 | orchestrator | 2025-08-29 18:00:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:00:38.761420 | orchestrator | 2025-08-29 18:00:38 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED
2025-08-29 18:00:38.762995 | orchestrator | 2025-08-29 18:00:38 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:00:38.768024 | orchestrator | 2025-08-29 18:00:38 | INFO  | Task b04828dd-e76c-41a1-975e-0b621d650e37 is in state SUCCESS
2025-08-29 18:00:38.771875 | orchestrator |
2025-08-29 18:00:38.771963 | orchestrator |
2025-08-29 18:00:38.771977 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-08-29 18:00:38.771989 | orchestrator |
2025-08-29 18:00:38.772010 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-08-29 18:00:38.772020 | orchestrator | Friday 29 August 2025 17:57:05 +0000 (0:00:00.131) 0:00:00.131 *********
2025-08-29 18:00:38.772043 | orchestrator | ok: [localhost] => {
2025-08-29 18:00:38.772066 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-08-29 18:00:38.772088 | orchestrator | }
2025-08-29 18:00:38.772098 | orchestrator |
2025-08-29 18:00:38.772119 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-08-29 18:00:38.772139 | orchestrator | Friday 29 August 2025 17:57:05 +0000 (0:00:00.050) 0:00:00.181 *********
2025-08-29 18:00:38.772150 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-08-29 18:00:38.772161 | orchestrator | ...ignoring
2025-08-29 18:00:38.772171 | orchestrator |
2025-08-29 18:00:38.772191 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-08-29 18:00:38.772227 | orchestrator | Friday 29 August 2025 17:57:08 +0000 (0:00:02.990) 0:00:03.172 *********
2025-08-29 18:00:38.772295 | orchestrator | skipping: [localhost]
2025-08-29 18:00:38.772307 | orchestrator |
2025-08-29 18:00:38.772386 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-08-29 18:00:38.772420 | orchestrator | Friday 29 August 2025 17:57:08 +0000 (0:00:00.060) 0:00:03.232 *********
2025-08-29 18:00:38.772439 | orchestrator | ok: [localhost]
2025-08-29 18:00:38.772458 | orchestrator |
2025-08-29 18:00:38.772476 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:00:38.772494 | orchestrator |
2025-08-29 18:00:38.772528 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:00:38.772547 | orchestrator | Friday 29 August 2025 17:57:08 +0000 (0:00:00.164) 0:00:03.396 *********
2025-08-29 18:00:38.772566 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:00:38.772583 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:00:38.772602 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:00:38.772618 | orchestrator |
2025-08-29 18:00:38.772635 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:00:38.772654 | orchestrator | Friday 29 August 2025 17:57:08 +0000 (0:00:00.400) 0:00:03.797 *********
2025-08-29 18:00:38.772695 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-08-29 18:00:38.772716 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-08-29 18:00:38.772733 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-08-29 18:00:38.772752 | orchestrator |
2025-08-29 18:00:38.772772 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-08-29 18:00:38.772789 | orchestrator |
2025-08-29 18:00:38.772808 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-08-29 18:00:38.772826 | orchestrator | Friday 29 August 2025 17:57:09 +0000 (0:00:00.868) 0:00:04.665 *********
2025-08-29 18:00:38.772840 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 18:00:38.772850 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 18:00:38.772859 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 18:00:38.772869 | orchestrator |
2025-08-29 18:00:38.772878 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 18:00:38.772888 | orchestrator | Friday 29 August 2025 17:57:10 +0000 (0:00:00.443) 0:00:05.109 *********
2025-08-29 18:00:38.772897 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:00:38.772907 | orchestrator |
2025-08-29 18:00:38.772916 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-08-29 18:00:38.772926 | orchestrator | Friday 29 August 2025 17:57:10 +0000 (0:00:00.597) 0:00:05.707 *********
2025-08-29 18:00:38.772961 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 18:00:38.772992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 18:00:38.773076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 18:00:38.773097 | orchestrator | 2025-08-29 18:00:38.773124 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 18:00:38.773140 | orchestrator | Friday 29 August 2025 17:57:13 +0000 (0:00:03.032) 0:00:08.739 ********* 2025-08-29 18:00:38.773177 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.773195 | orchestrator | 
changed: [testbed-node-0] 2025-08-29 18:00:38.773213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.773308 | orchestrator | 2025-08-29 18:00:38.773607 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 18:00:38.773631 | orchestrator | Friday 29 August 2025 17:57:14 +0000 (0:00:00.732) 0:00:09.472 ********* 2025-08-29 18:00:38.773648 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.773659 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.773668 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.773678 | orchestrator | 2025-08-29 18:00:38.773687 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 18:00:38.773710 | orchestrator | Friday 29 August 2025 17:57:16 +0000 (0:00:01.556) 0:00:11.028 ********* 2025-08-29 18:00:38.773725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 18:00:38.773747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 18:00:38.773760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 18:00:38.773774 | orchestrator |
2025-08-29 18:00:38.773782 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-08-29 18:00:38.773790 | orchestrator | Friday 29 August 2025 17:57:19 +0000 (0:00:03.471) 0:00:14.499 *********
2025-08-29 18:00:38.773798 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:00:38.773806 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:00:38.773814 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:00:38.773822 | orchestrator |
2025-08-29 18:00:38.773830 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-08-29 18:00:38.773838 | orchestrator | Friday 29 August 2025 17:57:20 +0000 (0:00:01.231) 0:00:15.731 *********
2025-08-29 18:00:38.773845 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:00:38.773853 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:00:38.773861 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:00:38.773869 | orchestrator |
2025-08-29 18:00:38.773876 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 18:00:38.773884 | orchestrator | Friday 29 August 2025 17:57:25 +0000 (0:00:04.918) 0:00:20.649 *********
2025-08-29 18:00:38.773892 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:00:38.773900 | orchestrator |
2025-08-29 18:00:38.773908 |
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 18:00:38.773915 | orchestrator | Friday 29 August 2025 17:57:26 +0000 (0:00:00.738) 0:00:21.388 ********* 2025-08-29 18:00:38.773931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 18:00:38.773945 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.773957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 18:00:38.773966 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.773992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 
18:00:38.774007 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.774043 | orchestrator | 2025-08-29 18:00:38.774051 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 18:00:38.774059 | orchestrator | Friday 29 August 2025 17:57:31 +0000 (0:00:04.569) 0:00:25.957 ********* 2025-08-29 18:00:38.774071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 18:00:38.774080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.774095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 18:00:38.774118 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:00:38.774136 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; same service dict as above, MYSQL_HOST: 192.168.16.11)
2025-08-29 18:00:38.774153 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:00:38.774166 | orchestrator |
2025-08-29 18:00:38.774179 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-08-29 18:00:38.774193 | orchestrator | Friday 29 August 2025 17:57:34 +0000 (0:00:03.035) 0:00:28.993 *********
2025-08-29 18:00:38.774216 | orchestrator | skipping: [testbed-node-0] => (item=mariadb; same service dict, MYSQL_HOST: 192.168.16.10)
2025-08-29 18:00:38.774292 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:00:38.774316 | orchestrator | skipping: [testbed-node-1] => (item=mariadb; same service dict, MYSQL_HOST: 192.168.16.11)
2025-08-29 18:00:38.774326 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:00:38.774343 | orchestrator | skipping: [testbed-node-2] => (item=mariadb; same service dict, MYSQL_HOST: 192.168.16.12)
2025-08-29 18:00:38.774367 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:00:38.774375 | orchestrator |
2025-08-29 18:00:38.774383 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-08-29 18:00:38.774401 | orchestrator | Friday 29 August 2025 17:57:37 +0000 (0:00:03.200) 0:00:32.193 *********
2025-08-29 18:00:38.774438 | orchestrator | changed: [testbed-node-0] => (item=mariadb; same service dict, MYSQL_HOST: 192.168.16.10)
2025-08-29 18:00:38.774450 | orchestrator | changed: [testbed-node-1] => (item=mariadb; same service dict, MYSQL_HOST: 192.168.16.11)
2025-08-29 18:00:38.774492 | orchestrator | changed: [testbed-node-2] => (item=mariadb; same service dict, MYSQL_HOST: 192.168.16.12)
2025-08-29 18:00:38.774511 | orchestrator |
2025-08-29 18:00:38.774520 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-08-29 18:00:38.774549 | orchestrator | Friday 29 August 2025 17:57:41 +0000 (0:00:03.831) 0:00:36.025 *********
2025-08-29 18:00:38.774558 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:00:38.774566 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:00:38.774574 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:00:38.774582 | orchestrator |
2025-08-29 18:00:38.774590 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-08-29 18:00:38.774598 | orchestrator | Friday 29 August 2025 17:57:42 +0000 (0:00:01.160) 0:00:37.186 *********
2025-08-29 18:00:38.774606 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:00:38.774614 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:00:38.774622 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:00:38.774630 | orchestrator |
2025-08-29 18:00:38.774638 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-08-29 18:00:38.774646 | orchestrator | Friday 29 August 2025 17:57:42 +0000 (0:00:00.452) 0:00:37.638 *********
2025-08-29 18:00:38.774654 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:00:38.774662 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:00:38.774670 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:00:38.774687 | orchestrator |
2025-08-29 18:00:38.774704 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-08-29 18:00:38.774713 | orchestrator | Friday 29 August 2025 17:57:43 +0000 (0:00:00.345) 0:00:37.983 *********
2025-08-29 18:00:38.774730 | orchestrator | fatal:
[testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-08-29 18:00:38.774739 | orchestrator | ...ignoring 2025-08-29 18:00:38.774761 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-08-29 18:00:38.774770 | orchestrator | ...ignoring 2025-08-29 18:00:38.774787 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-08-29 18:00:38.774796 | orchestrator | ...ignoring 2025-08-29 18:00:38.774813 | orchestrator | 2025-08-29 18:00:38.774821 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-08-29 18:00:38.774838 | orchestrator | Friday 29 August 2025 17:57:53 +0000 (0:00:10.935) 0:00:48.919 ********* 2025-08-29 18:00:38.774855 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.774863 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:00:38.774880 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:00:38.774888 | orchestrator | 2025-08-29 18:00:38.774935 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-08-29 18:00:38.774944 | orchestrator | Friday 29 August 2025 17:57:55 +0000 (0:00:01.157) 0:00:50.076 ********* 2025-08-29 18:00:38.774952 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.774960 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.774968 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.774976 | orchestrator | 2025-08-29 18:00:38.774984 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-08-29 18:00:38.774991 | orchestrator | Friday 29 August 2025 17:57:55 +0000 (0:00:00.581) 0:00:50.658 ********* 2025-08-29 18:00:38.774999 | 
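The liveness probes above time out on all three nodes, which is expected on a fresh deploy: nothing is listening on 3306 yet, so the failures are ignored and the next task sorts hosts into groups from the probe results. A minimal sketch of that grouping step, assuming a plain dict of per-host results (the real role uses Ansible's `group_by`; the group names here are illustrative):

```python
def divide_hosts_by_liveness(results):
    """Partition hosts by whether the MariaDB port answered.

    `results` maps hostname -> True if the port responded with the
    'MariaDB' banner before the timeout, False otherwise.
    """
    groups = {"mariadb_port_alive_True": [], "mariadb_port_alive_False": []}
    for host, alive in sorted(results.items()):
        groups[f"mariadb_port_alive_{alive}"].append(host)
    return groups

# On a fresh deploy every probe times out, so all hosts land in the
# 'False' group and the cluster is bootstrapped from scratch.
fresh = divide_hosts_by_liveness({
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
})
```

With every host in the "port dead" group and no existing cluster marker, the role proceeds to bootstrap rather than failing on a stopped cluster.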
orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.775007 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.775015 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.775023 | orchestrator | 2025-08-29 18:00:38.775031 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-08-29 18:00:38.775049 | orchestrator | Friday 29 August 2025 17:57:56 +0000 (0:00:00.503) 0:00:51.162 ********* 2025-08-29 18:00:38.775057 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.775074 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.775082 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.775099 | orchestrator | 2025-08-29 18:00:38.775108 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-08-29 18:00:38.775130 | orchestrator | Friday 29 August 2025 17:57:56 +0000 (0:00:00.450) 0:00:51.612 ********* 2025-08-29 18:00:38.775156 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.775183 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:00:38.775211 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:00:38.775232 | orchestrator | 2025-08-29 18:00:38.775240 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-08-29 18:00:38.775277 | orchestrator | Friday 29 August 2025 17:57:57 +0000 (0:00:00.952) 0:00:52.565 ********* 2025-08-29 18:00:38.775287 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.775295 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.775312 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.775320 | orchestrator | 2025-08-29 18:00:38.775337 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 18:00:38.775345 | orchestrator | Friday 29 August 2025 17:57:58 +0000 (0:00:00.456) 0:00:53.022 ********* 2025-08-29 18:00:38.775353 | orchestrator | 
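The WSREP sync-status tasks above are skipped because no cluster exists yet; once nodes are up, the role waits until each one reports `wsrep_local_state_comment: Synced` before continuing. A sketch of extracting that flag from tab-separated `SHOW STATUS` output (illustrative parsing, not the role's exact code):

```python
def is_wsrep_synced(show_status_output: str) -> bool:
    """Return True if SHOW STATUS output reports the node as Synced.

    Expects MySQL-client style lines such as:
        "wsrep_local_state_comment\tSynced"
    """
    for line in show_status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            return parts[1].strip() == "Synced"
    return False
```

A node acting as an SST donor reports a state like "Donor/Desynced" and must not be treated as ready, which is why the role polls rather than checking once.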
skipping: [testbed-node-1] 2025-08-29 18:00:38.775361 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.775368 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-08-29 18:00:38.775376 | orchestrator | 2025-08-29 18:00:38.775384 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-08-29 18:00:38.775401 | orchestrator | Friday 29 August 2025 17:57:58 +0000 (0:00:00.410) 0:00:53.432 ********* 2025-08-29 18:00:38.775418 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.775435 | orchestrator | 2025-08-29 18:00:38.775443 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-08-29 18:00:38.775477 | orchestrator | Friday 29 August 2025 17:58:18 +0000 (0:00:20.229) 0:01:13.662 ********* 2025-08-29 18:00:38.775485 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.775531 | orchestrator | 2025-08-29 18:00:38.775544 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 18:00:38.775552 | orchestrator | Friday 29 August 2025 17:58:18 +0000 (0:00:00.144) 0:01:13.807 ********* 2025-08-29 18:00:38.775570 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.775578 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.775595 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.775603 | orchestrator | 2025-08-29 18:00:38.775620 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-08-29 18:00:38.775628 | orchestrator | Friday 29 August 2025 17:58:19 +0000 (0:00:01.096) 0:01:14.903 ********* 2025-08-29 18:00:38.775646 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.775654 | orchestrator | 2025-08-29 18:00:38.775697 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-08-29 18:00:38.775715 | orchestrator | 
Friday 29 August 2025 17:58:28 +0000 (0:00:08.543) 0:01:23.447 ********* 2025-08-29 18:00:38.775723 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.775740 | orchestrator | 2025-08-29 18:00:38.775748 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-08-29 18:00:38.775765 | orchestrator | Friday 29 August 2025 17:58:30 +0000 (0:00:01.591) 0:01:25.038 ********* 2025-08-29 18:00:38.775773 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.775792 | orchestrator | 2025-08-29 18:00:38.775810 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-08-29 18:00:38.775827 | orchestrator | Friday 29 August 2025 17:58:32 +0000 (0:00:02.646) 0:01:27.685 ********* 2025-08-29 18:00:38.775835 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.775852 | orchestrator | 2025-08-29 18:00:38.775860 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-08-29 18:00:38.775879 | orchestrator | Friday 29 August 2025 17:58:32 +0000 (0:00:00.142) 0:01:27.827 ********* 2025-08-29 18:00:38.775896 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.775904 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.775921 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.775930 | orchestrator | 2025-08-29 18:00:38.775946 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-08-29 18:00:38.775954 | orchestrator | Friday 29 August 2025 17:58:33 +0000 (0:00:00.572) 0:01:28.399 ********* 2025-08-29 18:00:38.775972 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.775988 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-08-29 18:00:38.775997 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:00:38.776004 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:00:38.776021 | 
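The bootstrap flow above starts testbed-node-0 alone and only then starts the other two nodes. In Galera terms, the bootstrap node comes up with an empty `gcomm://` address so it forms a new cluster, while joiners list the existing members. A sketch of that address construction, with the testbed IPs as assumed inputs:

```python
def wsrep_cluster_address(members, bootstrap: bool) -> str:
    """Build the Galera wsrep_cluster_address setting.

    The bootstrapping node must start with an empty member list so it
    forms a new cluster; joining nodes list every cluster member and
    sync from one of them.
    """
    return "gcomm://" if bootstrap else "gcomm://" + ",".join(members)

members = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
```

Once the joiners have synced, the bootstrap node itself is restarted with the full member list, which is the "Restart bootstrap mariadb service" play that follows.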
orchestrator | 2025-08-29 18:00:38.776029 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-08-29 18:00:38.776046 | orchestrator | skipping: no hosts matched 2025-08-29 18:00:38.776054 | orchestrator | 2025-08-29 18:00:38.776072 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 18:00:38.776089 | orchestrator | 2025-08-29 18:00:38.776097 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 18:00:38.776114 | orchestrator | Friday 29 August 2025 17:58:33 +0000 (0:00:00.356) 0:01:28.755 ********* 2025-08-29 18:00:38.776122 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:00:38.776142 | orchestrator | 2025-08-29 18:00:38.776181 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 18:00:38.776197 | orchestrator | Friday 29 August 2025 17:58:59 +0000 (0:00:25.713) 0:01:54.469 ********* 2025-08-29 18:00:38.776210 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:00:38.776218 | orchestrator | 2025-08-29 18:00:38.776226 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 18:00:38.776233 | orchestrator | Friday 29 August 2025 17:59:15 +0000 (0:00:15.589) 0:02:10.058 ********* 2025-08-29 18:00:38.776247 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:00:38.776255 | orchestrator | 2025-08-29 18:00:38.776306 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-08-29 18:00:38.776315 | orchestrator | 2025-08-29 18:00:38.776323 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 18:00:38.776331 | orchestrator | Friday 29 August 2025 17:59:17 +0000 (0:00:02.791) 0:02:12.850 ********* 2025-08-29 18:00:38.776339 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:00:38.776346 | 
orchestrator | 2025-08-29 18:00:38.776354 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 18:00:38.776369 | orchestrator | Friday 29 August 2025 17:59:39 +0000 (0:00:21.607) 0:02:34.457 ********* 2025-08-29 18:00:38.776376 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:00:38.776382 | orchestrator | 2025-08-29 18:00:38.776389 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 18:00:38.776395 | orchestrator | Friday 29 August 2025 18:00:00 +0000 (0:00:20.618) 0:02:55.076 ********* 2025-08-29 18:00:38.776402 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:00:38.776409 | orchestrator | 2025-08-29 18:00:38.776415 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-08-29 18:00:38.776422 | orchestrator | 2025-08-29 18:00:38.776428 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-08-29 18:00:38.776435 | orchestrator | Friday 29 August 2025 18:00:03 +0000 (0:00:02.946) 0:02:58.022 ********* 2025-08-29 18:00:38.776442 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.776448 | orchestrator | 2025-08-29 18:00:38.776455 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-08-29 18:00:38.776461 | orchestrator | Friday 29 August 2025 18:00:16 +0000 (0:00:13.003) 0:03:11.025 ********* 2025-08-29 18:00:38.776468 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.776474 | orchestrator | 2025-08-29 18:00:38.776481 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-08-29 18:00:38.776488 | orchestrator | Friday 29 August 2025 18:00:21 +0000 (0:00:05.607) 0:03:16.633 ********* 2025-08-29 18:00:38.776494 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.776501 | orchestrator | 2025-08-29 18:00:38.776508 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2025-08-29 18:00:38.776514 | orchestrator | 2025-08-29 18:00:38.776521 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-08-29 18:00:38.776527 | orchestrator | Friday 29 August 2025 18:00:24 +0000 (0:00:02.619) 0:03:19.253 ********* 2025-08-29 18:00:38.776538 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:00:38.776544 | orchestrator | 2025-08-29 18:00:38.776551 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-08-29 18:00:38.776558 | orchestrator | Friday 29 August 2025 18:00:24 +0000 (0:00:00.547) 0:03:19.801 ********* 2025-08-29 18:00:38.776564 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.776579 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.776592 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.776598 | orchestrator | 2025-08-29 18:00:38.776605 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-08-29 18:00:38.776611 | orchestrator | Friday 29 August 2025 18:00:27 +0000 (0:00:02.374) 0:03:22.175 ********* 2025-08-29 18:00:38.776618 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.776625 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.776631 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.776638 | orchestrator | 2025-08-29 18:00:38.776645 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-08-29 18:00:38.776651 | orchestrator | Friday 29 August 2025 18:00:29 +0000 (0:00:02.012) 0:03:24.188 ********* 2025-08-29 18:00:38.776658 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.776665 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.776671 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.776682 | 
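Note that the user-creation tasks above run only on testbed-node-0 and are skipped on the other nodes: with Galera, each shard's users need to be created exactly once, and replication distributes them to the rest. A minimal sketch of that "first host in the shard group" condition (an assumption about the role's intent, not its literal code):

```python
def should_run_once(host: str, shard_group: list) -> bool:
    """Create-user style tasks run only on the first host of the shard
    group; Galera replication propagates the result to the others."""
    return bool(shard_group) and host == shard_group[0]

shard_0 = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
```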
orchestrator | 2025-08-29 18:00:38.776689 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-08-29 18:00:38.776696 | orchestrator | Friday 29 August 2025 18:00:31 +0000 (0:00:01.983) 0:03:26.172 ********* 2025-08-29 18:00:38.776702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.776709 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.776715 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:00:38.776722 | orchestrator | 2025-08-29 18:00:38.776729 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-08-29 18:00:38.776735 | orchestrator | Friday 29 August 2025 18:00:33 +0000 (0:00:01.993) 0:03:28.165 ********* 2025-08-29 18:00:38.776742 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:00:38.776749 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:00:38.776755 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:00:38.776762 | orchestrator | 2025-08-29 18:00:38.776769 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-08-29 18:00:38.776775 | orchestrator | Friday 29 August 2025 18:00:36 +0000 (0:00:03.185) 0:03:31.351 ********* 2025-08-29 18:00:38.776782 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:00:38.776788 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:00:38.776795 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:00:38.776802 | orchestrator | 2025-08-29 18:00:38.776808 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:00:38.776815 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 18:00:38.776822 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-08-29 18:00:38.776830 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
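The "ready through VIP" check above goes through HAProxy, whose mariadb backend was configured earlier in this log: testbed-node-0 is the single active server and the other two carry the `backup` flag, so all writes target one Galera node at a time. A sketch that reproduces those `custom_member_list` lines (same format as in the item dicts above):

```python
def haproxy_member_lines(nodes, port=3306):
    """Render HAProxy 'server' lines matching the custom_member_list:
    the first node is the active backend, the rest are marked 'backup'
    so HAProxy only fails over when the primary's health check fails."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = f" server {name} {addr}:{port} check port {port} inter 2000 rise 2 fall 5"
        if i > 0:
            line += " backup"
        lines.append(line)
    return lines

members = haproxy_member_lines([
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
])
```

Routing writes to a single node avoids the write-conflict rollbacks that multi-writer Galera setups can hit under load.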
rescued=0 ignored=1  2025-08-29 18:00:38.776837 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-08-29 18:00:38.776843 | orchestrator | 2025-08-29 18:00:38.776850 | orchestrator | 2025-08-29 18:00:38.776856 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:00:38.776863 | orchestrator | Friday 29 August 2025 18:00:36 +0000 (0:00:00.246) 0:03:31.598 ********* 2025-08-29 18:00:38.776870 | orchestrator | =============================================================================== 2025-08-29 18:00:38.776876 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 47.32s 2025-08-29 18:00:38.776883 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.21s 2025-08-29 18:00:38.776902 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 20.23s 2025-08-29 18:00:38.776910 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.00s 2025-08-29 18:00:38.776924 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.94s 2025-08-29 18:00:38.776931 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.54s 2025-08-29 18:00:38.776937 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.74s 2025-08-29 18:00:38.776944 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.61s 2025-08-29 18:00:38.776950 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.92s 2025-08-29 18:00:38.776957 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.57s 2025-08-29 18:00:38.776963 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.83s 2025-08-29 
18:00:38.776970 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.47s 2025-08-29 18:00:38.776977 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.20s 2025-08-29 18:00:38.776987 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.19s 2025-08-29 18:00:38.776994 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.04s 2025-08-29 18:00:38.777001 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.03s 2025-08-29 18:00:38.777007 | orchestrator | Check MariaDB service --------------------------------------------------- 2.99s 2025-08-29 18:00:38.777017 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.65s 2025-08-29 18:00:38.777024 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.62s 2025-08-29 18:00:38.777031 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s
2025-08-29 18:00:38.777038 | orchestrator | 2025-08-29 18:00:38 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED 2025-08-29 18:00:38.777044 | orchestrator | 2025-08-29 18:00:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:00:41.826166 | orchestrator | 2025-08-29 18:00:41 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:00:41.827418 | orchestrator | 2025-08-29 18:00:41 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:00:41.828995 | orchestrator | 2025-08-29 18:00:41 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED 2025-08-29 18:00:41.829206 | orchestrator | 2025-08-29 18:00:41 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:06.251495 | orchestrator | 2025-08-29 18:01:06 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state STARTED 2025-08-29 18:01:06.251579 | orchestrator | 2025-08-29 18:01:06 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:01:06.252364 | orchestrator | 2025-08-29 18:01:06 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED 2025-08-29 18:01:06.252515 | orchestrator | 2025-08-29
18:01:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:01:09.289505 | orchestrator | 2025-08-29 18:01:09 | INFO  | Task d82c7156-0224-4ead-b0b4-4c22808273fb is in state SUCCESS 2025-08-29 18:01:09.292030 | orchestrator | 2025-08-29 18:01:09.292075 | orchestrator | 2025-08-29 18:01:09.292089 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-08-29 18:01:09.292100 | orchestrator | 2025-08-29 18:01:09.292112 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-08-29 18:01:09.292122 | orchestrator | Friday 29 August 2025 17:59:00 +0000 (0:00:00.666) 0:00:00.666 ********* 2025-08-29 18:01:09.292133 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:01:09.292145 | orchestrator | 2025-08-29 18:01:09.292155 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-08-29 18:01:09.292165 | orchestrator | Friday 29 August 2025 17:59:01 +0000 (0:00:00.713) 0:00:01.379 ********* 2025-08-29 18:01:09.292175 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.292187 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.292197 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.292207 | orchestrator | 2025-08-29 18:01:09.292217 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-08-29 18:01:09.292227 | orchestrator | Friday 29 August 2025 17:59:01 +0000 (0:00:00.746) 0:00:02.126 ********* 2025-08-29 18:01:09.292237 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.292247 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.292257 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.292327 | orchestrator | 2025-08-29 18:01:09.292337 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-08-29 18:01:09.292347 | 
orchestrator | Friday 29 August 2025 17:59:02 +0000 (0:00:00.321) 0:00:02.447 ********* 2025-08-29 18:01:09.292356 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.292366 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.292376 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.292412 | orchestrator | 2025-08-29 18:01:09.292422 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-08-29 18:01:09.292432 | orchestrator | Friday 29 August 2025 17:59:03 +0000 (0:00:00.860) 0:00:03.308 ********* 2025-08-29 18:01:09.292441 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.292451 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.292461 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.292470 | orchestrator | 2025-08-29 18:01:09.292479 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-08-29 18:01:09.292489 | orchestrator | Friday 29 August 2025 17:59:03 +0000 (0:00:00.335) 0:00:03.643 ********* 2025-08-29 18:01:09.292498 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.292507 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.292517 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.292526 | orchestrator | 2025-08-29 18:01:09.292536 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-08-29 18:01:09.292545 | orchestrator | Friday 29 August 2025 17:59:03 +0000 (0:00:00.319) 0:00:03.963 ********* 2025-08-29 18:01:09.292554 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.292690 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.292769 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.292784 | orchestrator | 2025-08-29 18:01:09.292795 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-08-29 18:01:09.292807 | orchestrator | Friday 29 August 2025 17:59:04 +0000 (0:00:00.328) 
0:00:04.292 ********* 2025-08-29 18:01:09.292819 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.292831 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.292843 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.293123 | orchestrator | 2025-08-29 18:01:09.293139 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 18:01:09.293148 | orchestrator | Friday 29 August 2025 17:59:04 +0000 (0:00:00.520) 0:00:04.812 ********* 2025-08-29 18:01:09.293158 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.293167 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.293177 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.293187 | orchestrator | 2025-08-29 18:01:09.293197 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 18:01:09.293206 | orchestrator | Friday 29 August 2025 17:59:04 +0000 (0:00:00.316) 0:00:05.129 ********* 2025-08-29 18:01:09.293216 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 18:01:09.293226 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 18:01:09.293235 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 18:01:09.293245 | orchestrator | 2025-08-29 18:01:09.293254 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 18:01:09.293283 | orchestrator | Friday 29 August 2025 17:59:05 +0000 (0:00:00.728) 0:00:05.858 ********* 2025-08-29 18:01:09.293293 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.293303 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.293312 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.293322 | orchestrator | 2025-08-29 18:01:09.293331 | orchestrator | TASK [ceph-facts : Find a running mon container] 
******************************* 2025-08-29 18:01:09.293341 | orchestrator | Friday 29 August 2025 17:59:06 +0000 (0:00:00.428) 0:00:06.286 ********* 2025-08-29 18:01:09.293350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 18:01:09.293360 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 18:01:09.293547 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 18:01:09.293558 | orchestrator | 2025-08-29 18:01:09.293583 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 18:01:09.293593 | orchestrator | Friday 29 August 2025 17:59:08 +0000 (0:00:02.215) 0:00:08.501 ********* 2025-08-29 18:01:09.293613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 18:01:09.293623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 18:01:09.293633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 18:01:09.293642 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.293652 | orchestrator | 2025-08-29 18:01:09.293662 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 18:01:09.293705 | orchestrator | Friday 29 August 2025 17:59:08 +0000 (0:00:00.417) 0:00:08.919 ********* 2025-08-29 18:01:09.293718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.293731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2025-08-29 18:01:09.293741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.293751 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.293761 | orchestrator | 2025-08-29 18:01:09.293770 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 18:01:09.293780 | orchestrator | Friday 29 August 2025 17:59:09 +0000 (0:00:00.921) 0:00:09.841 ********* 2025-08-29 18:01:09.293791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.293803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.293814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2025-08-29 18:01:09.293823 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.293833 | orchestrator | 2025-08-29 18:01:09.293843 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 18:01:09.293853 | orchestrator | Friday 29 August 2025 17:59:09 +0000 (0:00:00.154) 0:00:09.996 ********* 2025-08-29 18:01:09.293865 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '37d3d31bc1d1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 17:59:06.764093', 'end': '2025-08-29 17:59:06.811332', 'delta': '0:00:00.047239', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['37d3d31bc1d1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-08-29 18:01:09.293890 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '067c2ffe8f41', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 17:59:07.552890', 'end': '2025-08-29 17:59:07.596468', 'delta': '0:00:00.043578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['067c2ffe8f41'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-08-29 18:01:09.293926 | orchestrator | ok: 
[testbed-node-3] => (item={'changed': False, 'stdout': 'e47054f05e4a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 17:59:08.094787', 'end': '2025-08-29 17:59:08.131149', 'delta': '0:00:00.036362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e47054f05e4a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-08-29 18:01:09.293938 | orchestrator | 2025-08-29 18:01:09.293947 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 18:01:09.293957 | orchestrator | Friday 29 August 2025 17:59:10 +0000 (0:00:00.420) 0:00:10.416 ********* 2025-08-29 18:01:09.293967 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.293976 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.293986 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.293995 | orchestrator | 2025-08-29 18:01:09.294005 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 18:01:09.294053 | orchestrator | Friday 29 August 2025 17:59:10 +0000 (0:00:00.467) 0:00:10.883 ********* 2025-08-29 18:01:09.294066 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-08-29 18:01:09.294077 | orchestrator | 2025-08-29 18:01:09.294086 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 18:01:09.294096 | orchestrator | Friday 29 August 2025 17:59:12 +0000 (0:00:01.642) 0:00:12.525 ********* 2025-08-29 18:01:09.294106 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294115 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294125 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294134 | orchestrator | 2025-08-29 18:01:09.294144 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 18:01:09.294154 | orchestrator | Friday 29 August 2025 17:59:12 +0000 (0:00:00.316) 0:00:12.842 ********* 2025-08-29 18:01:09.294163 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294173 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294183 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294192 | orchestrator | 2025-08-29 18:01:09.294202 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 18:01:09.294212 | orchestrator | Friday 29 August 2025 17:59:13 +0000 (0:00:00.428) 0:00:13.270 ********* 2025-08-29 18:01:09.294221 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294231 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294241 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294335 | orchestrator | 2025-08-29 18:01:09.294349 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 18:01:09.294359 | orchestrator | Friday 29 August 2025 17:59:13 +0000 (0:00:00.524) 0:00:13.794 ********* 2025-08-29 18:01:09.294369 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.294387 | orchestrator | 2025-08-29 18:01:09.294397 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 18:01:09.294407 | orchestrator | Friday 29 August 2025 17:59:13 +0000 (0:00:00.138) 0:00:13.933 ********* 2025-08-29 18:01:09.294416 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294426 | orchestrator | 2025-08-29 18:01:09.294435 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 
18:01:09.294445 | orchestrator | Friday 29 August 2025 17:59:13 +0000 (0:00:00.261) 0:00:14.195 ********* 2025-08-29 18:01:09.294454 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294464 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294474 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294483 | orchestrator | 2025-08-29 18:01:09.294493 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 18:01:09.294502 | orchestrator | Friday 29 August 2025 17:59:14 +0000 (0:00:00.324) 0:00:14.520 ********* 2025-08-29 18:01:09.294512 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294521 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294531 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294540 | orchestrator | 2025-08-29 18:01:09.294550 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 18:01:09.294559 | orchestrator | Friday 29 August 2025 17:59:14 +0000 (0:00:00.382) 0:00:14.902 ********* 2025-08-29 18:01:09.294569 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294578 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294588 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294597 | orchestrator | 2025-08-29 18:01:09.294607 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 18:01:09.294616 | orchestrator | Friday 29 August 2025 17:59:15 +0000 (0:00:00.538) 0:00:15.441 ********* 2025-08-29 18:01:09.294626 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294635 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294644 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294654 | orchestrator | 2025-08-29 18:01:09.294663 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 
18:01:09.294679 | orchestrator | Friday 29 August 2025 17:59:15 +0000 (0:00:00.360) 0:00:15.802 ********* 2025-08-29 18:01:09.294689 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294698 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294708 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294717 | orchestrator | 2025-08-29 18:01:09.294727 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 18:01:09.294736 | orchestrator | Friday 29 August 2025 17:59:15 +0000 (0:00:00.338) 0:00:16.140 ********* 2025-08-29 18:01:09.294746 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294755 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294765 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294774 | orchestrator | 2025-08-29 18:01:09.294784 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 18:01:09.294826 | orchestrator | Friday 29 August 2025 17:59:16 +0000 (0:00:00.356) 0:00:16.496 ********* 2025-08-29 18:01:09.294838 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.294848 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.294857 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.294867 | orchestrator | 2025-08-29 18:01:09.294876 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 18:01:09.294886 | orchestrator | Friday 29 August 2025 17:59:16 +0000 (0:00:00.535) 0:00:17.032 ********* 2025-08-29 18:01:09.294897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd', 'dm-uuid-LVM-OqDG69t2vDaMZOSVNYzQsHcamcItuLTl1BlHeYkcX7dm3chbRI1wtvAKHp0WLUD2'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.294915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca', 'dm-uuid-LVM-GeQfWNL5PTOhGNNfRWS0IbIydprklRI12ZL8udWoflwZgPkVZQjQdRuNlD9nJ5hY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.294926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.294936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.294946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.294956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.294971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129', 'dm-uuid-LVM-ULB2gWLlz2AdGy8HiFWlMZDHhaZvCU06Fl3MfVfjSLpPZ9EuBrU7lFdIGZEopowg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jq6FGk-VWre-Zblz-qi7M-NFUu-gpED-HalvBt', 'scsi-0QEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae', 'scsi-SQEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2', 'dm-uuid-LVM-u9knlHc70OesxONFTpvJpQrQMt493OzUXELOIcRt1U0MLaOUw5bOgGqcmYktu9JG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xy292o-a1aF-88n0-5PuI-v5n4-SSXU-DAhGVS', 'scsi-0QEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff', 'scsi-SQEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217', 'scsi-SQEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295196 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.295234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16', 
'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12', 'dm-uuid-LVM-fcIw1H3lu8i6pMvymK1dZFlPy4lkQ9ZNvGa0GU49Ovc7OUkZWQpmdJqvrMMIZdlM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-euxw4P-W6xv-792Y-3K4Q-DM05-27QV-XtcBBi', 'scsi-0QEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f', 'scsi-SQEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4', 'dm-uuid-LVM-ZPM3oy9rFQdt2qS5meKQf8Sb5LM8gmm9dE2KuxJJLUNJZ3q9zc5er2Wc2d9c9yBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lDIUPJ-Cz7F-HF3P-wwdB-p9MW-Kng2-2lXYh8', 'scsi-0QEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87', 'scsi-SQEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e', 'scsi-SQEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-08-29 18:01:09.295463 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.295473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 18:01:09.295535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iS2ciG-R3is-3hmZ-FLZL-azvV-f5Rp-K1AJgY', 'scsi-0QEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c', 'scsi-SQEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oUaaae-JHpJ-FipB-6T3E-vLiy-UwcG-Zeom8E', 'scsi-0QEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527', 'scsi-SQEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0', 'scsi-SQEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 18:01:09.295701 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:01:09.295711 | orchestrator | 2025-08-29 18:01:09.295721 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-08-29 18:01:09.295730 | orchestrator | Friday 29 August 2025 17:59:17 +0000 (0:00:00.625) 0:00:17.657 ********* 2025-08-29 18:01:09.295741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd', 'dm-uuid-LVM-OqDG69t2vDaMZOSVNYzQsHcamcItuLTl1BlHeYkcX7dm3chbRI1wtvAKHp0WLUD2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca', 'dm-uuid-LVM-GeQfWNL5PTOhGNNfRWS0IbIydprklRI12ZL8udWoflwZgPkVZQjQdRuNlD9nJ5hY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129', 'dm-uuid-LVM-ULB2gWLlz2AdGy8HiFWlMZDHhaZvCU06Fl3MfVfjSLpPZ9EuBrU7lFdIGZEopowg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 18:01:09.295945 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2', 'dm-uuid-LVM-u9knlHc70OesxONFTpvJpQrQMt493OzUXELOIcRt1U0MLaOUw5bOgGqcmYktu9JG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.295960 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f31d3c31-ee4b-483f-a3c2-6492dae07e0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.295978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--76bb4758--fd8e--569b--82df--4997dbff6ccd-osd--block--76bb4758--fd8e--569b--82df--4997dbff6ccd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jq6FGk-VWre-Zblz-qi7M-NFUu-gpED-HalvBt', 'scsi-0QEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae', 'scsi-SQEMU_QEMU_HARDDISK_20300dc2-4158-438d-b195-18b8d76d00ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296004 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ab048149--1b6d--515a--8df0--d9a146565eca-osd--block--ab048149--1b6d--515a--8df0--d9a146565eca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xy292o-a1aF-88n0-5PuI-v5n4-SSXU-DAhGVS', 'scsi-0QEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff', 'scsi-SQEMU_QEMU_HARDDISK_57070356-ca6b-46ac-b3ca-d106a6094fff'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217', 'scsi-SQEMU_QEMU_HARDDISK_09270e93-6558-41e1-b148-ad056c65a217'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296212 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296243 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296299 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.296324 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec837cdc-1e29-4e10-9703-468e978b2daa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296337 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129-osd--block--7e0f67bb--93ba--55c2--b7d3--c3a17e91e129'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-euxw4P-W6xv-792Y-3K4Q-DM05-27QV-XtcBBi', 'scsi-0QEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f', 'scsi-SQEMU_QEMU_HARDDISK_8cf5a937-7553-474f-9654-82589e52b79f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--90167df7--514b--5586--921e--4d7a2964fdd2-osd--block--90167df7--514b--5586--921e--4d7a2964fdd2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lDIUPJ-Cz7F-HF3P-wwdB-p9MW-Kng2-2lXYh8', 'scsi-0QEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87', 'scsi-SQEMU_QEMU_HARDDISK_5cc89214-04a9-4a5a-ac59-f5bd895bbd87'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e', 'scsi-SQEMU_QEMU_HARDDISK_370f8e9e-996a-4d39-adb3-26d918a9c02e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296385 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296395 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.296406 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12', 'dm-uuid-LVM-fcIw1H3lu8i6pMvymK1dZFlPy4lkQ9ZNvGa0GU49Ovc7OUkZWQpmdJqvrMMIZdlM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296416 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4', 'dm-uuid-LVM-ZPM3oy9rFQdt2qS5meKQf8Sb5LM8gmm9dE2KuxJJLUNJZ3q9zc5er2Wc2d9c9yBo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296504 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296514 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16', 'scsi-SQEMU_QEMU_HARDDISK_1d77be04-615d-47f2-877f-564a4cbf903e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1b4aa328--f83b--56f5--ada4--b8257b659e12-osd--block--1b4aa328--f83b--56f5--ada4--b8257b659e12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iS2ciG-R3is-3hmZ-FLZL-azvV-f5Rp-K1AJgY', 'scsi-0QEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c', 'scsi-SQEMU_QEMU_HARDDISK_a18b030a-ae85-4637-b6b5-bac67700b18c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--756a9a3b--59dc--526e--9851--f6b5408065e4-osd--block--756a9a3b--59dc--526e--9851--f6b5408065e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oUaaae-JHpJ-FipB-6T3E-vLiy-UwcG-Zeom8E', 'scsi-0QEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527', 'scsi-SQEMU_QEMU_HARDDISK_e457a33d-5293-40a2-9d8c-11847a0f2527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296591 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0', 'scsi-SQEMU_QEMU_HARDDISK_eb850900-8a70-4f68-bf30-0b7ae8c748a0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296609 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-17-02-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 18:01:09.296621 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.296632 | orchestrator |
2025-08-29 18:01:09.296644 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-08-29 18:01:09.296655 | orchestrator | Friday 29 August 2025 17:59:18 +0000 (0:00:00.721) 0:00:18.379 *********
2025-08-29 18:01:09.296666 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:01:09.296678 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:01:09.296689 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:01:09.296700 | orchestrator |
2025-08-29 18:01:09.296712 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-08-29 18:01:09.296723 | orchestrator | Friday 29 August 2025 17:59:18 +0000 (0:00:00.744) 0:00:19.123 *********
2025-08-29 18:01:09.296734 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:01:09.296745 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:01:09.296756 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:01:09.296767 | orchestrator |
2025-08-29 18:01:09.296778 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 18:01:09.296790 | orchestrator | Friday 29 August 2025 17:59:19 +0000 (0:00:00.518) 0:00:19.642 *********
2025-08-29 18:01:09.296801 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:01:09.296812 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:01:09.296823 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:01:09.296834 | orchestrator |
2025-08-29 18:01:09.296845 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 18:01:09.296856 | orchestrator | Friday 29 August 2025 17:59:20 +0000 (0:00:00.681) 0:00:20.323 *********
2025-08-29 18:01:09.296868 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.296886 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.296897 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.296908 | orchestrator |
2025-08-29 18:01:09.296919 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-08-29 18:01:09.296929 | orchestrator | Friday 29 August 2025 17:59:20 +0000 (0:00:00.337) 0:00:20.661 *********
2025-08-29 18:01:09.296938 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.296948 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.296958 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.296967 | orchestrator |
2025-08-29 18:01:09.296977 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-08-29 18:01:09.296987 | orchestrator | Friday 29 August 2025 17:59:20 +0000 (0:00:00.510) 0:00:21.171 *********
2025-08-29 18:01:09.296996 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.297006 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.297015 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.297025 | orchestrator |
2025-08-29 18:01:09.297034 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-08-29 18:01:09.297044 | orchestrator | Friday 29 August 2025 17:59:21 +0000 (0:00:00.592) 0:00:21.763 *********
2025-08-29 18:01:09.297054 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 18:01:09.297064 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 18:01:09.297073 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 18:01:09.297083 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 18:01:09.297093 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 18:01:09.297102 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 18:01:09.297112 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 18:01:09.297122 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 18:01:09.297131 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 18:01:09.297141 | orchestrator |
2025-08-29 18:01:09.297150 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-08-29 18:01:09.297160 | orchestrator | Friday 29 August 2025 17:59:22 +0000 (0:00:01.003) 0:00:22.767 *********
2025-08-29 18:01:09.297170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 18:01:09.297179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 18:01:09.297189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 18:01:09.297198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 18:01:09.297208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 18:01:09.297217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 18:01:09.297227 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.297237 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.297246 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 18:01:09.297256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 18:01:09.297282 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 18:01:09.297292 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.297301 | orchestrator |
2025-08-29 18:01:09.297311 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-08-29 18:01:09.297328 | orchestrator | Friday 29 August 2025 17:59:22 +0000 (0:00:00.424) 0:00:23.192 *********
2025-08-29 18:01:09.297338 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 18:01:09.297348 | orchestrator |
2025-08-29 18:01:09.297358 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 18:01:09.297368 | orchestrator | Friday 29 August 2025 17:59:23 +0000 (0:00:00.738) 0:00:23.930 *********
2025-08-29 18:01:09.297378 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.297393 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.297403 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.297413 | orchestrator |
2025-08-29 18:01:09.297427 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 18:01:09.297437 | orchestrator | Friday 29 August 2025 17:59:24 +0000 (0:00:00.330) 0:00:24.261 *********
2025-08-29 18:01:09.297447 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.297457 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.297466 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.297476 | orchestrator |
2025-08-29 18:01:09.297485 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 18:01:09.297495 | orchestrator | Friday 29 August 2025 17:59:24 +0000 (0:00:00.335) 0:00:24.596 *********
2025-08-29 18:01:09.297505 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:01:09.297514 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:01:09.297524 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:01:09.297533 | orchestrator |
2025-08-29 18:01:09.297543 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 18:01:09.297553 | orchestrator | Friday 29 August 2025 17:59:24 +0000 (0:00:00.340) 0:00:24.937 ********* 2025-08-29
18:01:09.297562 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.297572 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.297581 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.297591 | orchestrator | 2025-08-29 18:01:09.297600 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 18:01:09.297610 | orchestrator | Friday 29 August 2025 17:59:25 +0000 (0:00:00.617) 0:00:25.555 ********* 2025-08-29 18:01:09.297619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 18:01:09.297629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 18:01:09.297639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 18:01:09.297648 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.297658 | orchestrator | 2025-08-29 18:01:09.297668 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 18:01:09.297677 | orchestrator | Friday 29 August 2025 17:59:25 +0000 (0:00:00.419) 0:00:25.975 ********* 2025-08-29 18:01:09.297687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 18:01:09.297696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 18:01:09.297706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 18:01:09.297715 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.297725 | orchestrator | 2025-08-29 18:01:09.297735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 18:01:09.297744 | orchestrator | Friday 29 August 2025 17:59:26 +0000 (0:00:00.402) 0:00:26.378 ********* 2025-08-29 18:01:09.297754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 18:01:09.297763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 18:01:09.297773 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 18:01:09.297783 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.297792 | orchestrator | 2025-08-29 18:01:09.297802 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 18:01:09.297812 | orchestrator | Friday 29 August 2025 17:59:26 +0000 (0:00:00.435) 0:00:26.814 ********* 2025-08-29 18:01:09.297821 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:01:09.297831 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:01:09.297841 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:01:09.297850 | orchestrator | 2025-08-29 18:01:09.297860 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 18:01:09.297869 | orchestrator | Friday 29 August 2025 17:59:26 +0000 (0:00:00.360) 0:00:27.174 ********* 2025-08-29 18:01:09.297879 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 18:01:09.297888 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 18:01:09.297903 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 18:01:09.297913 | orchestrator | 2025-08-29 18:01:09.297922 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 18:01:09.297932 | orchestrator | Friday 29 August 2025 17:59:27 +0000 (0:00:00.524) 0:00:27.698 ********* 2025-08-29 18:01:09.297941 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 18:01:09.297951 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 18:01:09.297961 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 18:01:09.297970 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 18:01:09.297980 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-08-29 18:01:09.297990 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 18:01:09.297999 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 18:01:09.298009 | orchestrator | 2025-08-29 18:01:09.298070 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 18:01:09.298080 | orchestrator | Friday 29 August 2025 17:59:28 +0000 (0:00:01.104) 0:00:28.802 ********* 2025-08-29 18:01:09.298090 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 18:01:09.298104 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 18:01:09.298114 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 18:01:09.298124 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 18:01:09.298134 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 18:01:09.298143 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 18:01:09.298153 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 18:01:09.298162 | orchestrator | 2025-08-29 18:01:09.298178 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-08-29 18:01:09.298187 | orchestrator | Friday 29 August 2025 17:59:30 +0000 (0:00:02.177) 0:00:30.980 ********* 2025-08-29 18:01:09.298197 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:01:09.298207 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:01:09.298216 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-08-29 18:01:09.298226 | orchestrator | 2025-08-29 18:01:09.298235 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-08-29 18:01:09.298245 | orchestrator | Friday 29 August 2025 17:59:31 +0000 (0:00:00.414) 0:00:31.394 ********* 2025-08-29 18:01:09.298255 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 18:01:09.298313 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 18:01:09.298323 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 18:01:09.298334 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 18:01:09.298352 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 18:01:09.298362 | orchestrator | 2025-08-29 18:01:09.298372 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-08-29 18:01:09.298381 | orchestrator | Friday 29 August 2025 18:00:16 +0000 (0:00:44.962) 0:01:16.357 ********* 2025-08-29 18:01:09.298391 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298400 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298410 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298420 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298429 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298439 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298448 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-08-29 18:01:09.298458 | orchestrator | 2025-08-29 18:01:09.298467 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-08-29 18:01:09.298477 | orchestrator | Friday 29 August 2025 18:00:39 +0000 (0:00:23.452) 0:01:39.809 ********* 2025-08-29 18:01:09.298487 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298496 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298506 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298515 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298525 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298534 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298544 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 18:01:09.298553 | orchestrator | 2025-08-29 18:01:09.298563 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-08-29 18:01:09.298573 | orchestrator | Friday 29 August 2025 18:00:50 +0000 (0:00:11.385) 0:01:51.195 ********* 2025-08-29 18:01:09.298587 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298596 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 18:01:09.298606 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 18:01:09.298616 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298626 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 18:01:09.298635 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 18:01:09.298650 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298660 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 18:01:09.298670 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 18:01:09.298679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298689 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 18:01:09.298698 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 18:01:09.298717 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 18:01:09.298727 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-08-29 18:01:09.298737 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 18:01:09.298746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-08-29 18:01:09.298756 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-08-29 18:01:09.298765 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-08-29 18:01:09.298775 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-08-29 18:01:09.298785 | orchestrator |
2025-08-29 18:01:09.298794 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:01:09.298804 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-08-29 18:01:09.298816 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-08-29 18:01:09.298826 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-08-29 18:01:09.298835 | orchestrator |
2025-08-29 18:01:09.298845 | orchestrator |
2025-08-29 18:01:09.298855 | orchestrator |
2025-08-29 18:01:09.298864 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:01:09.298874 | orchestrator | Friday 29 August 2025 18:01:07 +0000 (0:00:17.007) 0:02:08.202 *********
2025-08-29 18:01:09.298884 | orchestrator | ===============================================================================
2025-08-29 18:01:09.298893 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.96s
2025-08-29 18:01:09.298903 | orchestrator | generate keys ---------------------------------------------------------- 23.45s
2025-08-29 18:01:09.298913 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.01s
2025-08-29 18:01:09.298922 | orchestrator | get keys from monitors ------------------------------------------------- 11.39s
2025-08-29 18:01:09.298932 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s
2025-08-29 18:01:09.298941 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.18s
2025-08-29 18:01:09.298951 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.64s
2025-08-29 18:01:09.298960 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.10s
2025-08-29 18:01:09.298970 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.00s
2025-08-29 18:01:09.298979 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s
2025-08-29 18:01:09.298989 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s
2025-08-29 18:01:09.298999 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.75s
2025-08-29 18:01:09.299008 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.74s
2025-08-29 18:01:09.299018 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s
2025-08-29 18:01:09.299027 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.73s
2025-08-29 18:01:09.299037 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.72s
2025-08-29 18:01:09.299047 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s
2025-08-29 18:01:09.299056 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2025-08-29 18:01:09.299066 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s
2025-08-29 18:01:09.299076 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s
2025-08-29 18:01:09.299093 | orchestrator | 2025-08-29 18:01:09 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:09.299106 | orchestrator | 2025-08-29 18:01:09 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:09.299117 | orchestrator | 2025-08-29 18:01:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:12.337862 | orchestrator | 2025-08-29 18:01:12 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:12.337954 | orchestrator | 2025-08-29 18:01:12 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:12.339088 | orchestrator | 2025-08-29 18:01:12 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:12.339116 | orchestrator | 2025-08-29 18:01:12 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:15.378591 | orchestrator | 2025-08-29 18:01:15 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:15.379391 | orchestrator | 2025-08-29 18:01:15 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:15.382604 | orchestrator | 2025-08-29 18:01:15 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:15.382661 | orchestrator | 2025-08-29 18:01:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:18.425187 | orchestrator | 2025-08-29 18:01:18 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:18.426104 | orchestrator | 2025-08-29 18:01:18 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:18.426960 | orchestrator | 2025-08-29 18:01:18 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:18.426980 | orchestrator | 2025-08-29
18:01:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:21.479512 | orchestrator | 2025-08-29 18:01:21 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:21.481238 | orchestrator | 2025-08-29 18:01:21 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:21.487062 | orchestrator | 2025-08-29 18:01:21 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:21.487694 | orchestrator | 2025-08-29 18:01:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:24.534654 | orchestrator | 2025-08-29 18:01:24 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:24.536869 | orchestrator | 2025-08-29 18:01:24 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:24.539330 | orchestrator | 2025-08-29 18:01:24 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:24.539464 | orchestrator | 2025-08-29 18:01:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:27.583352 | orchestrator | 2025-08-29 18:01:27 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:27.584208 | orchestrator | 2025-08-29 18:01:27 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:27.585514 | orchestrator | 2025-08-29 18:01:27 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:27.585797 | orchestrator | 2025-08-29 18:01:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:30.638744 | orchestrator | 2025-08-29 18:01:30 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:30.639548 | orchestrator | 2025-08-29 18:01:30 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:30.640772 | orchestrator | 2025-08-29 18:01:30 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:30.641531 | orchestrator | 2025-08-29 18:01:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:33.687299 | orchestrator | 2025-08-29 18:01:33 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:33.691555 | orchestrator | 2025-08-29 18:01:33 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:33.695004 | orchestrator | 2025-08-29 18:01:33 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:33.695332 | orchestrator | 2025-08-29 18:01:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:36.766690 | orchestrator | 2025-08-29 18:01:36 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:36.768129 | orchestrator | 2025-08-29 18:01:36 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:36.770631 | orchestrator | 2025-08-29 18:01:36 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:36.770923 | orchestrator | 2025-08-29 18:01:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:39.848445 | orchestrator | 2025-08-29 18:01:39 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:39.850357 | orchestrator | 2025-08-29 18:01:39 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:39.851821 | orchestrator | 2025-08-29 18:01:39 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state STARTED
2025-08-29 18:01:39.851856 | orchestrator | 2025-08-29 18:01:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:42.896616 | orchestrator | 2025-08-29 18:01:42 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:42.899864 | orchestrator | 2025-08-29 18:01:42 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:42.900381 | orchestrator | 2025-08-29 18:01:42 | INFO  | Task 4f2a35a4-d68f-4cfa-ace6-76d202a5b94f is in state SUCCESS
2025-08-29 18:01:42.902148 | orchestrator | 2025-08-29 18:01:42 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:01:42.902188 | orchestrator | 2025-08-29 18:01:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:45.959092 | orchestrator | 2025-08-29 18:01:45 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:45.960202 | orchestrator | 2025-08-29 18:01:45 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:45.963500 | orchestrator | 2025-08-29 18:01:45 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:01:45.963562 | orchestrator | 2025-08-29 18:01:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:49.009536 | orchestrator | 2025-08-29 18:01:49 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:49.009981 | orchestrator | 2025-08-29 18:01:49 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:49.011518 | orchestrator | 2025-08-29 18:01:49 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:01:49.011549 | orchestrator | 2025-08-29 18:01:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:52.059580 | orchestrator | 2025-08-29 18:01:52 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:52.061386 | orchestrator | 2025-08-29 18:01:52 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:52.064158 | orchestrator | 2025-08-29 18:01:52 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:01:52.064171 | orchestrator | 2025-08-29 18:01:52 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:55.114556 | orchestrator | 2025-08-29 18:01:55 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:55.115896 | orchestrator | 2025-08-29 18:01:55 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:55.116967 | orchestrator | 2025-08-29 18:01:55 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:01:55.116996 | orchestrator | 2025-08-29 18:01:55 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:01:58.161988 | orchestrator | 2025-08-29 18:01:58 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:01:58.163663 | orchestrator | 2025-08-29 18:01:58 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:01:58.165697 | orchestrator | 2025-08-29 18:01:58 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:01:58.165744 | orchestrator | 2025-08-29 18:01:58 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:01.228696 | orchestrator | 2025-08-29 18:02:01 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:01.232397 | orchestrator | 2025-08-29 18:02:01 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:01.236542 | orchestrator | 2025-08-29 18:02:01 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:01.236567 | orchestrator | 2025-08-29 18:02:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:04.285553 | orchestrator | 2025-08-29 18:02:04 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:04.287507 | orchestrator | 2025-08-29 18:02:04 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:04.289138 | orchestrator | 2025-08-29 18:02:04 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:04.289227 | orchestrator | 2025-08-29 18:02:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:07.340807 | orchestrator | 2025-08-29 18:02:07 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:07.345576 | orchestrator | 2025-08-29 18:02:07 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:07.348440 | orchestrator | 2025-08-29 18:02:07 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:07.348685 | orchestrator | 2025-08-29 18:02:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:10.391547 | orchestrator | 2025-08-29 18:02:10 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:10.391961 | orchestrator | 2025-08-29 18:02:10 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:10.393615 | orchestrator | 2025-08-29 18:02:10 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:10.393640 | orchestrator | 2025-08-29 18:02:10 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:13.441685 | orchestrator | 2025-08-29 18:02:13 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:13.444651 | orchestrator | 2025-08-29 18:02:13 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:13.446993 | orchestrator | 2025-08-29 18:02:13 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:13.447042 | orchestrator | 2025-08-29 18:02:13 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:16.502819 | orchestrator | 2025-08-29 18:02:16 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:16.504330 | orchestrator | 2025-08-29 18:02:16 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:16.506613 | orchestrator | 2025-08-29 18:02:16 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:16.506663 | orchestrator | 2025-08-29 18:02:16 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:19.551579 | orchestrator | 2025-08-29 18:02:19 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:19.554384 | orchestrator | 2025-08-29 18:02:19 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:19.557161 | orchestrator | 2025-08-29 18:02:19 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:19.557192 | orchestrator | 2025-08-29 18:02:19 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:22.614762 | orchestrator | 2025-08-29 18:02:22 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:22.619148 | orchestrator | 2025-08-29 18:02:22 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:22.621551 | orchestrator | 2025-08-29 18:02:22 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:22.621578 | orchestrator | 2025-08-29 18:02:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:25.669405 | orchestrator | 2025-08-29 18:02:25 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:25.670543 | orchestrator | 2025-08-29 18:02:25 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:25.672600 | orchestrator | 2025-08-29 18:02:25 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:25.672628 | orchestrator | 2025-08-29 18:02:25 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:28.715430 | orchestrator | 2025-08-29 18:02:28 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:28.718585 | orchestrator | 2025-08-29 18:02:28 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state STARTED
2025-08-29 18:02:28.720784 | orchestrator | 2025-08-29 18:02:28 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED
2025-08-29 18:02:28.720802 | orchestrator | 2025-08-29 18:02:28 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:02:31.769045 | orchestrator | 2025-08-29 18:02:31 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED
2025-08-29 18:02:31.770738 | orchestrator | 2025-08-29 18:02:31 | INFO  | Task 9478704c-573f-43e1-ab38-4b1fd0e0ec2e is in state SUCCESS
2025-08-29 18:02:31.771988 | orchestrator |
2025-08-29 18:02:31.772033 | orchestrator |
2025-08-29 18:02:31.772040 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-08-29 18:02:31.772045 | orchestrator |
2025-08-29 18:02:31.772049 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-08-29 18:02:31.772053 | orchestrator | Friday 29 August 2025 18:01:12 +0000 (0:00:00.176) 0:00:00.176 *********
2025-08-29 18:02:31.772057 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-08-29 18:02:31.772063 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 18:02:31.772083 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 18:02:31.772087 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 18:02:31.772091 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-08-29 18:02:31.772095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-08-29 18:02:31.772099 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-08-29 18:02:31.772103 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-08-29 18:02:31.772106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-08-29 18:02:31.772110 | orchestrator |
2025-08-29 18:02:31.772114 | orchestrator | TASK [Create share directory] **************************************************
2025-08-29 18:02:31.772118 | orchestrator | Friday 29 August 2025 18:01:17 +0000 (0:00:04.281) 0:00:04.458 *********
2025-08-29 18:02:31.772122 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 18:02:31.772126 | orchestrator |
2025-08-29 18:02:31.772130 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-08-29 18:02:31.772134 | orchestrator | Friday 29 August 2025 18:01:18 +0000 (0:00:01.143) 0:00:05.602 *********
2025-08-29 18:02:31.772138 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-08-29 18:02:31.772142 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 18:02:31.772145 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 18:02:31.772149 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 18:02:31.772153 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-08-29 18:02:31.772157 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-08-29 18:02:31.772160 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-08-29 18:02:31.772164 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-08-29 18:02:31.772168 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-08-29 18:02:31.772172 | orchestrator |
2025-08-29
18:02:31.772175 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-08-29 18:02:31.772179 | orchestrator | Friday 29 August 2025 18:01:33 +0000 (0:00:14.821) 0:00:20.424 ********* 2025-08-29 18:02:31.772236 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-08-29 18:02:31.772242 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 18:02:31.772246 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 18:02:31.772250 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 18:02:31.772254 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 18:02:31.772258 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-08-29 18:02:31.772291 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-08-29 18:02:31.772295 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-08-29 18:02:31.772299 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-08-29 18:02:31.772303 | orchestrator | 2025-08-29 18:02:31.772307 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:02:31.772310 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:02:31.772321 | orchestrator | 2025-08-29 18:02:31.772325 | orchestrator | 2025-08-29 18:02:31.772329 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:02:31.772332 | orchestrator | Friday 29 August 2025 18:01:40 +0000 (0:00:07.370) 0:00:27.794 ********* 2025-08-29 18:02:31.772336 | orchestrator | =============================================================================== 2025-08-29 18:02:31.772340 | 
orchestrator | Write ceph keys to the share directory --------------------------------- 14.82s 2025-08-29 18:02:31.772344 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.37s 2025-08-29 18:02:31.772347 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.28s 2025-08-29 18:02:31.772358 | orchestrator | Create share directory -------------------------------------------------- 1.14s 2025-08-29 18:02:31.772479 | orchestrator | 2025-08-29 18:02:31.772488 | orchestrator | 2025-08-29 18:02:31.772494 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:02:31.772500 | orchestrator | 2025-08-29 18:02:31.772515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:02:31.772522 | orchestrator | Friday 29 August 2025 18:00:41 +0000 (0:00:00.271) 0:00:00.271 ********* 2025-08-29 18:02:31.772528 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.772535 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.772542 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.772546 | orchestrator | 2025-08-29 18:02:31.772550 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:02:31.772553 | orchestrator | Friday 29 August 2025 18:00:41 +0000 (0:00:00.309) 0:00:00.580 ********* 2025-08-29 18:02:31.772557 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-08-29 18:02:31.772561 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-08-29 18:02:31.772565 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 18:02:31.772569 | orchestrator | 2025-08-29 18:02:31.772573 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 18:02:31.772576 | orchestrator | 2025-08-29 18:02:31.772580 | orchestrator | TASK 
[horizon : include_tasks] ************************************************* 2025-08-29 18:02:31.772584 | orchestrator | Friday 29 August 2025 18:00:42 +0000 (0:00:00.428) 0:00:01.009 ********* 2025-08-29 18:02:31.772588 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:02:31.772592 | orchestrator | 2025-08-29 18:02:31.772596 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 18:02:31.772599 | orchestrator | Friday 29 August 2025 18:00:42 +0000 (0:00:00.520) 0:00:01.529 ********* 2025-08-29 18:02:31.772607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.772630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.772636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.772645 | orchestrator | 2025-08-29 18:02:31.772649 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 18:02:31.772653 | orchestrator | Friday 29 August 2025 18:00:43 +0000 (0:00:01.140) 0:00:02.670 ********* 2025-08-29 18:02:31.772656 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.772660 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.772664 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.772667 | orchestrator | 2025-08-29 18:02:31.772671 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 18:02:31.772675 | orchestrator | Friday 29 August 2025 18:00:44 +0000 (0:00:00.506) 
0:00:03.177 ********* 2025-08-29 18:02:31.772682 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 18:02:31.772689 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 18:02:31.772693 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 18:02:31.772696 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 18:02:31.772700 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 18:02:31.772704 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 18:02:31.772707 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 18:02:31.772711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 18:02:31.772715 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 18:02:31.772719 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 18:02:31.772722 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 18:02:31.772726 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 18:02:31.772730 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 18:02:31.772733 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 18:02:31.772737 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 18:02:31.772741 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 18:02:31.772744 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 18:02:31.772748 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 18:02:31.772752 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 18:02:31.772756 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 18:02:31.772763 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 18:02:31.772767 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 18:02:31.772771 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 18:02:31.772774 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 18:02:31.772779 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 18:02:31.772784 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 18:02:31.772788 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 18:02:31.772792 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 18:02:31.772796 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 18:02:31.772799 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 18:02:31.772803 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 18:02:31.772807 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 18:02:31.772811 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-08-29 18:02:31.772814 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 18:02:31.772818 | orchestrator | 2025-08-29 18:02:31.772822 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.772826 | orchestrator | Friday 29 August 2025 18:00:44 +0000 (0:00:00.726) 0:00:03.904 ********* 2025-08-29 18:02:31.772829 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.772833 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.772837 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.772841 | orchestrator | 2025-08-29 18:02:31.772844 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.772850 | orchestrator | Friday 29 August 2025 18:00:45 +0000 (0:00:00.316) 0:00:04.220 ********* 2025-08-29 18:02:31.772854 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.772858 | orchestrator | 2025-08-29 18:02:31.772864 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.772869 | orchestrator | Friday 29 August 2025 18:00:45 +0000 (0:00:00.171) 0:00:04.392 
********* 2025-08-29 18:02:31.772872 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.772876 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.772880 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.772883 | orchestrator | 2025-08-29 18:02:31.772887 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.772891 | orchestrator | Friday 29 August 2025 18:00:45 +0000 (0:00:00.543) 0:00:04.936 ********* 2025-08-29 18:02:31.772894 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.772898 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.772902 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.772913 | orchestrator | 2025-08-29 18:02:31.772916 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.772920 | orchestrator | Friday 29 August 2025 18:00:46 +0000 (0:00:00.338) 0:00:05.274 ********* 2025-08-29 18:02:31.772924 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.772927 | orchestrator | 2025-08-29 18:02:31.772931 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.772935 | orchestrator | Friday 29 August 2025 18:00:46 +0000 (0:00:00.136) 0:00:05.411 ********* 2025-08-29 18:02:31.772938 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.772942 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.772946 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.772950 | orchestrator | 2025-08-29 18:02:31.772953 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.772957 | orchestrator | Friday 29 August 2025 18:00:46 +0000 (0:00:00.302) 0:00:05.714 ********* 2025-08-29 18:02:31.772961 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.772964 | orchestrator | ok: [testbed-node-1] 2025-08-29 
18:02:31.772968 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.772972 | orchestrator | 2025-08-29 18:02:31.772976 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.772980 | orchestrator | Friday 29 August 2025 18:00:47 +0000 (0:00:00.309) 0:00:06.023 ********* 2025-08-29 18:02:31.772983 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.772987 | orchestrator | 2025-08-29 18:02:31.772991 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.772994 | orchestrator | Friday 29 August 2025 18:00:47 +0000 (0:00:00.352) 0:00:06.375 ********* 2025-08-29 18:02:31.772998 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773002 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773005 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773009 | orchestrator | 2025-08-29 18:02:31.773013 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773016 | orchestrator | Friday 29 August 2025 18:00:47 +0000 (0:00:00.318) 0:00:06.694 ********* 2025-08-29 18:02:31.773020 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773024 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.773027 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773031 | orchestrator | 2025-08-29 18:02:31.773035 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773039 | orchestrator | Friday 29 August 2025 18:00:48 +0000 (0:00:00.320) 0:00:07.015 ********* 2025-08-29 18:02:31.773042 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773046 | orchestrator | 2025-08-29 18:02:31.773050 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773053 | orchestrator | Friday 29 August 2025 18:00:48 +0000 
(0:00:00.142) 0:00:07.157 ********* 2025-08-29 18:02:31.773057 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773061 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773064 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773068 | orchestrator | 2025-08-29 18:02:31.773072 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773076 | orchestrator | Friday 29 August 2025 18:00:48 +0000 (0:00:00.309) 0:00:07.467 ********* 2025-08-29 18:02:31.773079 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773083 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.773087 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773091 | orchestrator | 2025-08-29 18:02:31.773094 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773098 | orchestrator | Friday 29 August 2025 18:00:49 +0000 (0:00:00.575) 0:00:08.043 ********* 2025-08-29 18:02:31.773102 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773105 | orchestrator | 2025-08-29 18:02:31.773109 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773113 | orchestrator | Friday 29 August 2025 18:00:49 +0000 (0:00:00.138) 0:00:08.181 ********* 2025-08-29 18:02:31.773121 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773125 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773130 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773134 | orchestrator | 2025-08-29 18:02:31.773138 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773143 | orchestrator | Friday 29 August 2025 18:00:49 +0000 (0:00:00.310) 0:00:08.492 ********* 2025-08-29 18:02:31.773147 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773151 | orchestrator | ok: 
[testbed-node-1] 2025-08-29 18:02:31.773156 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773160 | orchestrator | 2025-08-29 18:02:31.773164 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773168 | orchestrator | Friday 29 August 2025 18:00:49 +0000 (0:00:00.365) 0:00:08.858 ********* 2025-08-29 18:02:31.773172 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773177 | orchestrator | 2025-08-29 18:02:31.773181 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773185 | orchestrator | Friday 29 August 2025 18:00:50 +0000 (0:00:00.153) 0:00:09.011 ********* 2025-08-29 18:02:31.773189 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773194 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773198 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773202 | orchestrator | 2025-08-29 18:02:31.773207 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773214 | orchestrator | Friday 29 August 2025 18:00:50 +0000 (0:00:00.529) 0:00:09.540 ********* 2025-08-29 18:02:31.773218 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773225 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.773229 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773233 | orchestrator | 2025-08-29 18:02:31.773238 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773242 | orchestrator | Friday 29 August 2025 18:00:50 +0000 (0:00:00.323) 0:00:09.863 ********* 2025-08-29 18:02:31.773246 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773251 | orchestrator | 2025-08-29 18:02:31.773255 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773259 | orchestrator | Friday 29 
August 2025 18:00:51 +0000 (0:00:00.209) 0:00:10.073 ********* 2025-08-29 18:02:31.773285 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773290 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773294 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773298 | orchestrator | 2025-08-29 18:02:31.773302 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773307 | orchestrator | Friday 29 August 2025 18:00:51 +0000 (0:00:00.370) 0:00:10.443 ********* 2025-08-29 18:02:31.773311 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773315 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.773319 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773323 | orchestrator | 2025-08-29 18:02:31.773328 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773332 | orchestrator | Friday 29 August 2025 18:00:51 +0000 (0:00:00.430) 0:00:10.873 ********* 2025-08-29 18:02:31.773337 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773341 | orchestrator | 2025-08-29 18:02:31.773345 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773350 | orchestrator | Friday 29 August 2025 18:00:52 +0000 (0:00:00.150) 0:00:11.024 ********* 2025-08-29 18:02:31.773354 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773358 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773362 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773367 | orchestrator | 2025-08-29 18:02:31.773371 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773376 | orchestrator | Friday 29 August 2025 18:00:52 +0000 (0:00:00.675) 0:00:11.699 ********* 2025-08-29 18:02:31.773386 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773391 | 
orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.773395 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773399 | orchestrator | 2025-08-29 18:02:31.773403 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773407 | orchestrator | Friday 29 August 2025 18:00:53 +0000 (0:00:00.435) 0:00:12.135 ********* 2025-08-29 18:02:31.773412 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773416 | orchestrator | 2025-08-29 18:02:31.773420 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773424 | orchestrator | Friday 29 August 2025 18:00:53 +0000 (0:00:00.137) 0:00:12.272 ********* 2025-08-29 18:02:31.773429 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773433 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773437 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773441 | orchestrator | 2025-08-29 18:02:31.773445 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 18:02:31.773449 | orchestrator | Friday 29 August 2025 18:00:53 +0000 (0:00:00.316) 0:00:12.589 ********* 2025-08-29 18:02:31.773454 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:02:31.773458 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:02:31.773463 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:02:31.773546 | orchestrator | 2025-08-29 18:02:31.773551 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 18:02:31.773554 | orchestrator | Friday 29 August 2025 18:00:54 +0000 (0:00:00.553) 0:00:13.143 ********* 2025-08-29 18:02:31.773559 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773562 | orchestrator | 2025-08-29 18:02:31.773567 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 18:02:31.773570 | 
orchestrator | Friday 29 August 2025 18:00:54 +0000 (0:00:00.153) 0:00:13.296 ********* 2025-08-29 18:02:31.773574 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773578 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773581 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773585 | orchestrator | 2025-08-29 18:02:31.773589 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-08-29 18:02:31.773592 | orchestrator | Friday 29 August 2025 18:00:54 +0000 (0:00:00.332) 0:00:13.629 ********* 2025-08-29 18:02:31.773596 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:02:31.773600 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:02:31.773604 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:02:31.773607 | orchestrator | 2025-08-29 18:02:31.773611 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-08-29 18:02:31.773615 | orchestrator | Friday 29 August 2025 18:00:56 +0000 (0:00:01.866) 0:00:15.496 ********* 2025-08-29 18:02:31.773619 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 18:02:31.773622 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 18:02:31.773626 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-08-29 18:02:31.773630 | orchestrator | 2025-08-29 18:02:31.773634 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-08-29 18:02:31.773637 | orchestrator | Friday 29 August 2025 18:00:58 +0000 (0:00:02.321) 0:00:17.817 ********* 2025-08-29 18:02:31.773641 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 18:02:31.773645 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 18:02:31.773649 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-08-29 18:02:31.773653 | orchestrator | 2025-08-29 18:02:31.773660 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-08-29 18:02:31.773667 | orchestrator | Friday 29 August 2025 18:01:01 +0000 (0:00:02.707) 0:00:20.525 ********* 2025-08-29 18:02:31.773676 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 18:02:31.773680 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 18:02:31.773683 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-08-29 18:02:31.773687 | orchestrator | 2025-08-29 18:02:31.773691 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-08-29 18:02:31.773695 | orchestrator | Friday 29 August 2025 18:01:03 +0000 (0:00:01.634) 0:00:22.160 ********* 2025-08-29 18:02:31.773698 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773702 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773706 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773709 | orchestrator | 2025-08-29 18:02:31.773713 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-08-29 18:02:31.773717 | orchestrator | Friday 29 August 2025 18:01:03 +0000 (0:00:00.364) 0:00:22.525 ********* 2025-08-29 18:02:31.773721 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773724 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773728 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773732 | orchestrator | 2025-08-29 18:02:31.773735 | orchestrator 
| TASK [horizon : include_tasks] ************************************************* 2025-08-29 18:02:31.773739 | orchestrator | Friday 29 August 2025 18:01:03 +0000 (0:00:00.353) 0:00:22.878 ********* 2025-08-29 18:02:31.773743 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:02:31.773747 | orchestrator | 2025-08-29 18:02:31.773750 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-08-29 18:02:31.773754 | orchestrator | Friday 29 August 2025 18:01:04 +0000 (0:00:00.794) 0:00:23.672 ********* 2025-08-29 18:02:31.773759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.773775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.773779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.773789 | orchestrator | 2025-08-29 18:02:31.773793 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 18:02:31.773797 | orchestrator | Friday 29 August 2025 18:01:06 +0000 (0:00:01.690) 0:00:25.362 ********* 2025-08-29 18:02:31.773806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 18:02:31.773811 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 18:02:31.773828 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 18:02:31.773836 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773840 | orchestrator | 2025-08-29 18:02:31.773843 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 18:02:31.773847 | orchestrator | Friday 29 August 2025 18:01:07 +0000 (0:00:00.712) 0:00:26.074 ********* 2025-08-29 18:02:31.773858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 18:02:31.773866 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 18:02:31.773874 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 18:02:31.773893 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773897 | orchestrator | 2025-08-29 18:02:31.773901 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 18:02:31.773904 | orchestrator | Friday 29 August 2025 18:01:08 +0000 (0:00:01.405) 0:00:27.479 ********* 2025-08-29 18:02:31.773908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.773923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.773928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 18:02:31.773936 | orchestrator | 2025-08-29 18:02:31.773940 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 18:02:31.773943 | orchestrator | Friday 29 August 2025 18:01:09 +0000 (0:00:01.410) 0:00:28.890 ********* 2025-08-29 18:02:31.773947 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:02:31.773951 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:02:31.773955 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:02:31.773958 | orchestrator | 2025-08-29 18:02:31.773962 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 18:02:31.773968 | orchestrator | Friday 29 August 2025 18:01:10 +0000 (0:00:00.363) 0:00:29.254 ********* 2025-08-29 18:02:31.773974 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:02:31.773978 | orchestrator | 2025-08-29 18:02:31.773982 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-08-29 18:02:31.773985 | orchestrator | Friday 29 August 2025 18:01:11 +0000 (0:00:00.788) 0:00:30.042 ********* 2025-08-29 18:02:31.773989 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:02:31.773993 | orchestrator | 2025-08-29 18:02:31.773997 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 
2025-08-29 18:02:31.774000 | orchestrator | Friday 29 August 2025 18:01:13 +0000 (0:00:02.117) 0:00:32.159 ********* 2025-08-29 18:02:31.774004 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:02:31.774008 | orchestrator | 2025-08-29 18:02:31.774012 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-08-29 18:02:31.774047 | orchestrator | Friday 29 August 2025 18:01:15 +0000 (0:00:02.033) 0:00:34.193 ********* 2025-08-29 18:02:31.774051 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:02:31.774055 | orchestrator | 2025-08-29 18:02:31.774059 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 18:02:31.774063 | orchestrator | Friday 29 August 2025 18:01:29 +0000 (0:00:14.689) 0:00:48.882 ********* 2025-08-29 18:02:31.774066 | orchestrator | 2025-08-29 18:02:31.774070 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 18:02:31.774074 | orchestrator | Friday 29 August 2025 18:01:29 +0000 (0:00:00.070) 0:00:48.953 ********* 2025-08-29 18:02:31.774078 | orchestrator | 2025-08-29 18:02:31.774081 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-08-29 18:02:31.774085 | orchestrator | Friday 29 August 2025 18:01:30 +0000 (0:00:00.077) 0:00:49.030 ********* 2025-08-29 18:02:31.774089 | orchestrator | 2025-08-29 18:02:31.774092 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-08-29 18:02:31.774096 | orchestrator | Friday 29 August 2025 18:01:30 +0000 (0:00:00.068) 0:00:49.099 ********* 2025-08-29 18:02:31.774100 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:02:31.774104 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:02:31.774107 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:02:31.774111 | orchestrator | 2025-08-29 18:02:31.774115 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 18:02:31.774119 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-08-29 18:02:31.774126 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 18:02:31.774130 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-08-29 18:02:31.774134 | orchestrator | 2025-08-29 18:02:31.774138 | orchestrator | 2025-08-29 18:02:31.774141 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:02:31.774145 | orchestrator | Friday 29 August 2025 18:02:30 +0000 (0:01:00.082) 0:01:49.182 ********* 2025-08-29 18:02:31.774149 | orchestrator | =============================================================================== 2025-08-29 18:02:31.774152 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.08s 2025-08-29 18:02:31.774156 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.69s 2025-08-29 18:02:31.774160 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.71s 2025-08-29 18:02:31.774163 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.32s 2025-08-29 18:02:31.774167 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.12s 2025-08-29 18:02:31.774171 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.03s 2025-08-29 18:02:31.774174 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.87s 2025-08-29 18:02:31.774178 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.69s 2025-08-29 18:02:31.774182 | orchestrator | horizon : Copying over 
custom-settings.py ------------------------------- 1.63s 2025-08-29 18:02:31.774186 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.41s 2025-08-29 18:02:31.774189 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.41s 2025-08-29 18:02:31.774193 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s 2025-08-29 18:02:31.774197 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2025-08-29 18:02:31.774200 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2025-08-29 18:02:31.774204 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-08-29 18:02:31.774208 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2025-08-29 18:02:31.774212 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.68s 2025-08-29 18:02:31.774215 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2025-08-29 18:02:31.774219 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2025-08-29 18:02:31.774223 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s 2025-08-29 18:02:31.774226 | orchestrator | 2025-08-29 18:02:31 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED 2025-08-29 18:02:31.774233 | orchestrator | 2025-08-29 18:02:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:02:34.816237 | orchestrator | 2025-08-29 18:02:34 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:02:34.817855 | orchestrator | 2025-08-29 18:02:34 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state STARTED 2025-08-29 18:02:34.818246 | orchestrator | 
2025-08-29 18:02:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:02:43.959102 | orchestrator | 2025-08-29 18:02:43 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:02:43.959942 | orchestrator | 2025-08-29 18:02:43 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:02:43.963594 | orchestrator | 2025-08-29 18:02:43 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:02:43.964083 | orchestrator | 2025-08-29 18:02:43 | INFO  | Task 222ede74-0e1c-4652-8c88-fd0070b61d59 is in state STARTED 2025-08-29 18:02:43.966380 | orchestrator | 2025-08-29 18:02:43 | INFO  | Task 167b6640-d996-4f10-886d-6f3c3f717aac is in state SUCCESS 2025-08-29 18:02:43.966434 | orchestrator | 2025-08-29 18:02:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:02:47.058259 | orchestrator | 2025-08-29 18:02:47 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:02:47.061079 | orchestrator | 2025-08-29 18:02:47 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:02:47.062209 | orchestrator | 2025-08-29 18:02:47 | INFO  | Task
be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:02:47.063472 | orchestrator | 2025-08-29 18:02:47 | INFO  | Task 222ede74-0e1c-4652-8c88-fd0070b61d59 is in state STARTED 2025-08-29 18:02:47.063504 | orchestrator | 2025-08-29 18:02:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:02:50.103538 | orchestrator | 2025-08-29 18:02:50 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:02:50.105625 | orchestrator | 2025-08-29 18:02:50 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:02:50.106472 | orchestrator | 2025-08-29 18:02:50 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:02:50.107320 | orchestrator | 2025-08-29 18:02:50 | INFO  | Task 222ede74-0e1c-4652-8c88-fd0070b61d59 is in state SUCCESS 2025-08-29 18:02:50.107352 | orchestrator | 2025-08-29 18:02:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:02:53.139028 | orchestrator | 2025-08-29 18:02:53 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:02:53.139195 | orchestrator | 2025-08-29 18:02:53 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:02:53.141105 | orchestrator | 2025-08-29 18:02:53 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:02:53.143363 | orchestrator | 2025-08-29 18:02:53 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:02:53.143655 | orchestrator | 2025-08-29 18:02:53 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:02:53.143795 | orchestrator | 2025-08-29 18:02:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:02:56.195049 | orchestrator | 2025-08-29 18:02:56 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:02:56.195837 | orchestrator | 2025-08-29 18:02:56 | INFO  | Task 
c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:02:56.196747 | orchestrator | 2025-08-29 18:02:56 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:02:56.197129 | orchestrator | 2025-08-29 18:02:56 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:02:56.198251 | orchestrator | 2025-08-29 18:02:56 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:02:56.198366 | orchestrator | 2025-08-29 18:02:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:17.565378 | orchestrator | 2025-08-29 18:03:17 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:17.566251 | orchestrator | 2025-08-29 18:03:17 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:03:17.567595 | orchestrator | 2025-08-29 18:03:17 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:17.568529 | orchestrator | 2025-08-29 18:03:17 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:17.569645 | orchestrator | 2025-08-29 18:03:17 | INFO  | Task
18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:03:17.569787 | orchestrator | 2025-08-29 18:03:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:20.610489 | orchestrator | 2025-08-29 18:03:20 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:20.610654 | orchestrator | 2025-08-29 18:03:20 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state STARTED 2025-08-29 18:03:20.612628 | orchestrator | 2025-08-29 18:03:20 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:20.612654 | orchestrator | 2025-08-29 18:03:20 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:20.612666 | orchestrator | 2025-08-29 18:03:20 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:03:20.612677 | orchestrator | 2025-08-29 18:03:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:23.645811 | orchestrator | 2025-08-29 18:03:23 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:23.646814 | orchestrator | 2025-08-29 18:03:23 | INFO  | Task c819d1bd-886a-4f76-a1e0-d6adc7621b06 is in state SUCCESS 2025-08-29 18:03:23.648023 | orchestrator | 2025-08-29 18:03:23.648100 | orchestrator | 2025-08-29 18:03:23.648117 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-08-29 18:03:23.648147 | orchestrator | 2025-08-29 18:03:23.648169 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-08-29 18:03:23.648223 | orchestrator | Friday 29 August 2025 18:01:45 +0000 (0:00:00.253) 0:00:00.253 ********* 2025-08-29 18:03:23.648324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-08-29 18:03:23.648341 | orchestrator | 2025-08-29 18:03:23.648352 | orchestrator | 
TASK [osism.services.cephclient : Create required directories] ***************** 2025-08-29 18:03:23.648375 | orchestrator | Friday 29 August 2025 18:01:45 +0000 (0:00:00.241) 0:00:00.494 ********* 2025-08-29 18:03:23.648386 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-08-29 18:03:23.648397 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-08-29 18:03:23.648409 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-08-29 18:03:23.648464 | orchestrator | 2025-08-29 18:03:23.648475 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-08-29 18:03:23.648488 | orchestrator | Friday 29 August 2025 18:01:46 +0000 (0:00:01.319) 0:00:01.813 ********* 2025-08-29 18:03:23.648499 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-08-29 18:03:23.648510 | orchestrator | 2025-08-29 18:03:23.648521 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-08-29 18:03:23.648531 | orchestrator | Friday 29 August 2025 18:01:48 +0000 (0:00:01.280) 0:00:03.094 ********* 2025-08-29 18:03:23.648542 | orchestrator | changed: [testbed-manager] 2025-08-29 18:03:23.648553 | orchestrator | 2025-08-29 18:03:23.648564 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-08-29 18:03:23.648574 | orchestrator | Friday 29 August 2025 18:01:49 +0000 (0:00:01.079) 0:00:04.174 ********* 2025-08-29 18:03:23.648604 | orchestrator | changed: [testbed-manager] 2025-08-29 18:03:23.648617 | orchestrator | 2025-08-29 18:03:23.648630 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-08-29 18:03:23.648642 | orchestrator | Friday 29 August 2025 18:01:50 +0000 (0:00:00.989) 0:00:05.164 ********* 2025-08-29 18:03:23.648654 | orchestrator | FAILED - 
RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-08-29 18:03:23.648667 | orchestrator | ok: [testbed-manager] 2025-08-29 18:03:23.648679 | orchestrator | 2025-08-29 18:03:23.648692 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-08-29 18:03:23.648703 | orchestrator | Friday 29 August 2025 18:02:31 +0000 (0:00:41.525) 0:00:46.689 ********* 2025-08-29 18:03:23.648716 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-08-29 18:03:23.648729 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-08-29 18:03:23.648742 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-08-29 18:03:23.648754 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-08-29 18:03:23.648766 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-08-29 18:03:23.648778 | orchestrator | 2025-08-29 18:03:23.648816 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-08-29 18:03:23.648828 | orchestrator | Friday 29 August 2025 18:02:35 +0000 (0:00:04.192) 0:00:50.881 ********* 2025-08-29 18:03:23.648838 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-08-29 18:03:23.648849 | orchestrator | 2025-08-29 18:03:23.648860 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-08-29 18:03:23.648870 | orchestrator | Friday 29 August 2025 18:02:36 +0000 (0:00:00.522) 0:00:51.403 ********* 2025-08-29 18:03:23.648881 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:03:23.648891 | orchestrator | 2025-08-29 18:03:23.648902 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-08-29 18:03:23.648913 | orchestrator | Friday 29 August 2025 18:02:36 +0000 (0:00:00.135) 0:00:51.539 ********* 2025-08-29 18:03:23.648923 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:03:23.648934 | 
orchestrator | 2025-08-29 18:03:23.648944 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-08-29 18:03:23.648955 | orchestrator | Friday 29 August 2025 18:02:36 +0000 (0:00:00.396) 0:00:51.935 ********* 2025-08-29 18:03:23.648976 | orchestrator | changed: [testbed-manager] 2025-08-29 18:03:23.648987 | orchestrator | 2025-08-29 18:03:23.648998 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-08-29 18:03:23.649008 | orchestrator | Friday 29 August 2025 18:02:39 +0000 (0:00:02.130) 0:00:54.066 ********* 2025-08-29 18:03:23.649019 | orchestrator | changed: [testbed-manager] 2025-08-29 18:03:23.649030 | orchestrator | 2025-08-29 18:03:23.649040 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-08-29 18:03:23.649051 | orchestrator | Friday 29 August 2025 18:02:39 +0000 (0:00:00.862) 0:00:54.928 ********* 2025-08-29 18:03:23.649061 | orchestrator | changed: [testbed-manager] 2025-08-29 18:03:23.649072 | orchestrator | 2025-08-29 18:03:23.649082 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-08-29 18:03:23.649093 | orchestrator | Friday 29 August 2025 18:02:40 +0000 (0:00:00.757) 0:00:55.685 ********* 2025-08-29 18:03:23.649104 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-08-29 18:03:23.649114 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-08-29 18:03:23.649126 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-08-29 18:03:23.649137 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-08-29 18:03:23.649147 | orchestrator | 2025-08-29 18:03:23.649158 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:03:23.649169 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:03:23.649181 | 
orchestrator | 2025-08-29 18:03:23.649192 | orchestrator | 2025-08-29 18:03:23.649218 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:03:23.649230 | orchestrator | Friday 29 August 2025 18:02:42 +0000 (0:00:01.523) 0:00:57.209 ********* 2025-08-29 18:03:23.649261 | orchestrator | =============================================================================== 2025-08-29 18:03:23.649320 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.53s 2025-08-29 18:03:23.649331 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.19s 2025-08-29 18:03:23.649342 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.13s 2025-08-29 18:03:23.649352 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.52s 2025-08-29 18:03:23.649363 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.32s 2025-08-29 18:03:23.649373 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.28s 2025-08-29 18:03:23.649384 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.08s 2025-08-29 18:03:23.649394 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.99s 2025-08-29 18:03:23.649405 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.86s 2025-08-29 18:03:23.649415 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.76s 2025-08-29 18:03:23.649426 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.52s 2025-08-29 18:03:23.649436 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.40s 2025-08-29 18:03:23.649447 | orchestrator | osism.services.cephclient : Include 
container tasks --------------------- 0.24s 2025-08-29 18:03:23.649457 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-08-29 18:03:23.649468 | orchestrator | 2025-08-29 18:03:23.649479 | orchestrator | 2025-08-29 18:03:23.649489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:03:23.649500 | orchestrator | 2025-08-29 18:03:23.649510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:03:23.649521 | orchestrator | Friday 29 August 2025 18:02:46 +0000 (0:00:00.203) 0:00:00.203 ********* 2025-08-29 18:03:23.649532 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:03:23.649543 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:03:23.649554 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:03:23.649572 | orchestrator | 2025-08-29 18:03:23.649583 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:03:23.649594 | orchestrator | Friday 29 August 2025 18:02:47 +0000 (0:00:00.310) 0:00:00.514 ********* 2025-08-29 18:03:23.649605 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 18:03:23.649615 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 18:03:23.649626 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 18:03:23.649637 | orchestrator | 2025-08-29 18:03:23.649647 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 18:03:23.649658 | orchestrator | 2025-08-29 18:03:23.649668 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 18:03:23.649684 | orchestrator | Friday 29 August 2025 18:02:47 +0000 (0:00:00.726) 0:00:01.241 ********* 2025-08-29 18:03:23.649695 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:03:23.649706 | orchestrator | 
ok: [testbed-node-0] 2025-08-29 18:03:23.649716 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:03:23.649727 | orchestrator | 2025-08-29 18:03:23.649738 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:03:23.649750 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:03:23.649761 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:03:23.649772 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:03:23.649783 | orchestrator | 2025-08-29 18:03:23.649793 | orchestrator | 2025-08-29 18:03:23.649804 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:03:23.649814 | orchestrator | Friday 29 August 2025 18:02:48 +0000 (0:00:00.788) 0:00:02.029 ********* 2025-08-29 18:03:23.649825 | orchestrator | =============================================================================== 2025-08-29 18:03:23.649836 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.79s 2025-08-29 18:03:23.649846 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-08-29 18:03:23.649857 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-08-29 18:03:23.649867 | orchestrator | 2025-08-29 18:03:23.649878 | orchestrator | 2025-08-29 18:03:23.649889 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:03:23.649899 | orchestrator | 2025-08-29 18:03:23.649910 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:03:23.649921 | orchestrator | Friday 29 August 2025 18:00:41 +0000 (0:00:00.275) 0:00:00.275 ********* 2025-08-29 18:03:23.649931 | 
orchestrator | ok: [testbed-node-0] 2025-08-29 18:03:23.649942 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:03:23.649953 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:03:23.649963 | orchestrator | 2025-08-29 18:03:23.649974 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:03:23.649985 | orchestrator | Friday 29 August 2025 18:00:41 +0000 (0:00:00.292) 0:00:00.568 ********* 2025-08-29 18:03:23.649995 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 18:03:23.650006 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 18:03:23.650069 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 18:03:23.650083 | orchestrator | 2025-08-29 18:03:23.650094 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 18:03:23.650105 | orchestrator | 2025-08-29 18:03:23.650125 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 18:03:23.650137 | orchestrator | Friday 29 August 2025 18:00:42 +0000 (0:00:00.484) 0:00:01.052 ********* 2025-08-29 18:03:23.650148 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:03:23.650165 | orchestrator | 2025-08-29 18:03:23.650176 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 18:03:23.650187 | orchestrator | Friday 29 August 2025 18:00:42 +0000 (0:00:00.577) 0:00:01.629 ********* 2025-08-29 18:03:23.650203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.650225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.650239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.650258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650364 | orchestrator | 2025-08-29 18:03:23.650375 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 18:03:23.650386 | orchestrator | Friday 29 August 2025 18:00:44 +0000 (0:00:01.785) 0:00:03.415 ********* 2025-08-29 18:03:23.650397 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 18:03:23.650408 | orchestrator | 2025-08-29 18:03:23.650418 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 18:03:23.650429 | orchestrator | Friday 29 August 2025 18:00:45 +0000 (0:00:00.952) 0:00:04.368 ********* 2025-08-29 18:03:23.650439 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:03:23.650450 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:03:23.650471 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:03:23.650482 | orchestrator | 2025-08-29 18:03:23.650492 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 
18:03:23.650503 | orchestrator | Friday 29 August 2025 18:00:45 +0000 (0:00:00.567) 0:00:04.935 ********* 2025-08-29 18:03:23.650513 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 18:03:23.650524 | orchestrator | 2025-08-29 18:03:23.650535 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 18:03:23.650551 | orchestrator | Friday 29 August 2025 18:00:46 +0000 (0:00:00.820) 0:00:05.756 ********* 2025-08-29 18:03:23.650562 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:03:23.650573 | orchestrator | 2025-08-29 18:03:23.650583 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 18:03:23.650594 | orchestrator | Friday 29 August 2025 18:00:47 +0000 (0:00:00.634) 0:00:06.390 ********* 2025-08-29 18:03:23.650606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 
18:03:23.650623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.650636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.650654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 18:03:23.650734 | orchestrator | 2025-08-29 18:03:23.650751 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 18:03:23.650761 | orchestrator | Friday 29 August 2025 18:00:50 +0000 (0:00:03.399) 0:00:09.790 ********* 2025-08-29 18:03:23.650780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 18:03:23.650792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 18:03:23.650803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 18:03:23.650826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 18:03:23.650837 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.650849 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 18:03:23.650867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 18:03:23.650879 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:03:23.650899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 18:03:23.650911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 18:03:23.650922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 18:03:23.650933 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:03:23.650944 | orchestrator | 2025-08-29 18:03:23.650955 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 18:03:23.650966 | 
orchestrator | Friday 29 August 2025 18:00:51 +0000 (0:00:00.658) 0:00:10.448 ********* 2025-08-29 18:03:23.650982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 18:03:23.651000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 18:03:23.651021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 18:03:23.651032 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.651044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 18:03:23.651056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 18:03:23.651072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 18:03:23.651089 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:03:23.651100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 18:03:23.651119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 18:03:23.651131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 18:03:23.651142 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:03:23.651153 | orchestrator | 2025-08-29 18:03:23.651164 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 18:03:23.651174 | orchestrator | Friday 29 August 2025 18:00:52 +0000 (0:00:00.847) 0:00:11.295 ********* 2025-08-29 18:03:23.651186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.651209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 18:03:23.651227 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651374 | orchestrator |
2025-08-29 18:03:23.651385 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-08-29 18:03:23.651396 | orchestrator | Friday 29 August 2025 18:00:55 +0000 (0:00:03.556) 0:00:14.852 *********
2025-08-29 18:03:23.651416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651550 | orchestrator |
2025-08-29 18:03:23.651561 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-08-29 18:03:23.651571 | orchestrator | Friday 29 August 2025 18:01:01 +0000 (0:00:05.995) 0:00:20.847 *********
2025-08-29 18:03:23.651582 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.651593 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:03:23.651604 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:03:23.651614 | orchestrator |
2025-08-29 18:03:23.651625 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-08-29 18:03:23.651635 | orchestrator | Friday 29 August 2025 18:01:03 +0000 (0:00:01.565) 0:00:22.413 *********
2025-08-29 18:03:23.651646 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.651656 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.651665 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.651675 | orchestrator |
2025-08-29 18:03:23.651684 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-08-29 18:03:23.651694 | orchestrator | Friday 29 August 2025 18:01:03 +0000 (0:00:00.545) 0:00:22.959 *********
2025-08-29 18:03:23.651703 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.651713 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.651722 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.651731 | orchestrator |
2025-08-29 18:03:23.651741 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-08-29 18:03:23.651750 | orchestrator | Friday 29 August 2025 18:01:04 +0000 (0:00:00.274) 0:00:23.234 *********
2025-08-29 18:03:23.651760 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.651769 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.651779 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.651788 | orchestrator |
2025-08-29 18:03:23.651797 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-08-29 18:03:23.651807 | orchestrator | Friday 29 August 2025 18:01:04 +0000 (0:00:00.597) 0:00:23.831 *********
2025-08-29 18:03:23.651823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.651894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.651911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.651945 | orchestrator |
2025-08-29 18:03:23.651955 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 18:03:23.651964 | orchestrator | Friday 29 August 2025 18:01:07 +0000 (0:00:02.453) 0:00:26.285 *********
2025-08-29 18:03:23.651974 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.651983 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.651993 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.652002 | orchestrator |
2025-08-29 18:03:23.652011 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-08-29 18:03:23.652021 | orchestrator | Friday 29 August 2025 18:01:07 +0000 (0:00:00.382) 0:00:26.667 *********
2025-08-29 18:03:23.652030 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-08-29 18:03:23.652040 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-08-29 18:03:23.652050 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-08-29 18:03:23.652059 | orchestrator |
2025-08-29 18:03:23.652069 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-08-29 18:03:23.652078 | orchestrator | Friday 29 August 2025 18:01:09 +0000 (0:00:01.990) 0:00:28.658 *********
2025-08-29 18:03:23.652087 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 18:03:23.652097 | orchestrator |
2025-08-29 18:03:23.652106 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-08-29 18:03:23.652115 | orchestrator | Friday 29 August 2025 18:01:11 +0000 (0:00:01.546) 0:00:30.205 *********
2025-08-29 18:03:23.652125 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.652134 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.652144 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.652153 | orchestrator |
2025-08-29 18:03:23.652163 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-08-29 18:03:23.652178 | orchestrator | Friday 29 August 2025 18:01:11 +0000 (0:00:00.611) 0:00:30.816 *********
2025-08-29 18:03:23.652188 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 18:03:23.652202 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 18:03:23.652212 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 18:03:23.652221 | orchestrator |
2025-08-29 18:03:23.652231 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-08-29 18:03:23.652240 | orchestrator | Friday 29 August 2025 18:01:12 +0000 (0:00:01.080) 0:00:31.897 *********
2025-08-29 18:03:23.652250 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:03:23.652259 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:03:23.652284 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:03:23.652294 | orchestrator |
2025-08-29 18:03:23.652303 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-08-29 18:03:23.652313 | orchestrator | Friday 29 August 2025 18:01:13 +0000 (0:00:00.357) 0:00:32.254 *********
2025-08-29 18:03:23.652322 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 18:03:23.652332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 18:03:23.652341 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-08-29 18:03:23.652350 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 18:03:23.652360 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 18:03:23.652370 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-08-29 18:03:23.652379 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 18:03:23.652389 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 18:03:23.652398 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-08-29 18:03:23.652407 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 18:03:23.652417 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 18:03:23.652426 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-08-29 18:03:23.652435 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 18:03:23.652445 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 18:03:23.652455 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-08-29 18:03:23.652464 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 18:03:23.652478 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 18:03:23.652488 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-08-29 18:03:23.652497 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 18:03:23.652507 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 18:03:23.652516 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-08-29 18:03:23.652525 | orchestrator |
2025-08-29 18:03:23.652535 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-08-29 18:03:23.652544 | orchestrator | Friday 29 August 2025 18:01:22 +0000 (0:00:09.197) 0:00:41.451 *********
2025-08-29 18:03:23.652554 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 18:03:23.652570 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 18:03:23.652579 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-08-29 18:03:23.652588 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 18:03:23.652598 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 18:03:23.652607 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-08-29 18:03:23.652616 | orchestrator |
2025-08-29 18:03:23.652626 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-08-29 18:03:23.652635 | orchestrator | Friday 29 August 2025 18:01:25 +0000 (0:00:02.693) 0:00:44.144 *********
2025-08-29 18:03:23.652653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.652664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.652679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 18:03:23.652695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.652706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.652722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 18:03:23.652732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.652742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.652752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 18:03:23.652762 | orchestrator |
2025-08-29 18:03:23.652775 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 18:03:23.652790 | orchestrator | Friday 29 August 2025 18:01:27 +0000 (0:00:02.352) 0:00:46.497 *********
2025-08-29 18:03:23.652800 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.652810 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.652819 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.652829 | orchestrator |
2025-08-29 18:03:23.652838 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-08-29 18:03:23.652848 | orchestrator | Friday 29 August 2025 18:01:27 +0000 (0:00:00.313) 0:00:46.810 *********
2025-08-29 18:03:23.652857 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.652866 | orchestrator |
2025-08-29 18:03:23.652876 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-08-29 18:03:23.652885 | orchestrator | Friday 29 August 2025 18:01:29 +0000 (0:00:02.117) 0:00:48.927 *********
2025-08-29 18:03:23.652895 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.652904 | orchestrator |
2025-08-29 18:03:23.652913 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-08-29 18:03:23.652923 | orchestrator | Friday 29 August 2025 18:01:31 +0000 (0:00:02.091) 0:00:51.019 *********
2025-08-29 18:03:23.652932 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:03:23.652942 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:03:23.652951 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:03:23.652961 | orchestrator |
2025-08-29 18:03:23.652971 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-08-29 18:03:23.652980 | orchestrator | Friday 29 August 2025 18:01:33 +0000 (0:00:01.491) 0:00:52.510 *********
2025-08-29 18:03:23.652990 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:03:23.652999 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:03:23.653008 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:03:23.653018 | orchestrator |
2025-08-29 18:03:23.653027 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-08-29 18:03:23.653037 | orchestrator | Friday 29 August 2025 18:01:33 +0000 (0:00:00.395) 0:00:52.906 *********
2025-08-29 18:03:23.653046 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:03:23.653056 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:03:23.653065 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:03:23.653075 | orchestrator |
2025-08-29 18:03:23.653084 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-08-29 18:03:23.653094 | orchestrator | Friday 29 August 2025 18:01:34 +0000 (0:00:00.338) 0:00:53.245 *********
2025-08-29 18:03:23.653103 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.653113 | orchestrator |
2025-08-29 18:03:23.653122 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-08-29 18:03:23.653131 | orchestrator | Friday 29 August 2025 18:01:47 +0000 (0:00:13.465) 0:01:06.711 *********
2025-08-29 18:03:23.653141 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.653150 | orchestrator |
2025-08-29 18:03:23.653164 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 18:03:23.653174 | orchestrator | Friday 29 August 2025 18:01:57 +0000 (0:00:10.011) 0:01:16.722 *********
2025-08-29 18:03:23.653184 | orchestrator |
2025-08-29 18:03:23.653194 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 18:03:23.653203 | orchestrator | Friday 29 August 2025 18:01:57 +0000 (0:00:00.065) 0:01:16.788 *********
2025-08-29 18:03:23.653213 | orchestrator |
2025-08-29 18:03:23.653223 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 18:03:23.653232 | orchestrator | Friday 29 August 2025 18:01:58 +0000 (0:00:00.274) 0:01:17.063 *********
2025-08-29 18:03:23.653242 | orchestrator |
2025-08-29 18:03:23.653252 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-08-29 18:03:23.653261 | orchestrator | Friday 29 August 2025 18:01:58 +0000 (0:00:00.074) 0:01:17.138 *********
2025-08-29 18:03:23.653319 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.653330 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:03:23.653339 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:03:23.653355 | orchestrator |
2025-08-29 18:03:23.653364 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-08-29 18:03:23.653374 | orchestrator | Friday 29 August 2025 18:02:20 +0000 (0:00:22.444) 0:01:39.582 *********
2025-08-29 18:03:23.653383 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:03:23.653393 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:03:23.653402 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:03:23.653412 | orchestrator |
2025-08-29 18:03:23.653421 | orchestrator |
RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-08-29 18:03:23.653431 | orchestrator | Friday 29 August 2025 18:02:30 +0000 (0:00:10.302) 0:01:49.885 ********* 2025-08-29 18:03:23.653440 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:03:23.653449 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:03:23.653459 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:03:23.653468 | orchestrator | 2025-08-29 18:03:23.653478 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 18:03:23.653487 | orchestrator | Friday 29 August 2025 18:02:37 +0000 (0:00:06.735) 0:01:56.620 ********* 2025-08-29 18:03:23.653497 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:03:23.653506 | orchestrator | 2025-08-29 18:03:23.653516 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-08-29 18:03:23.653525 | orchestrator | Friday 29 August 2025 18:02:38 +0000 (0:00:00.934) 0:01:57.555 ********* 2025-08-29 18:03:23.653535 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:03:23.653544 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:03:23.653553 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:03:23.653563 | orchestrator | 2025-08-29 18:03:23.653573 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-08-29 18:03:23.653582 | orchestrator | Friday 29 August 2025 18:02:39 +0000 (0:00:00.851) 0:01:58.407 ********* 2025-08-29 18:03:23.653592 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:03:23.653601 | orchestrator | 2025-08-29 18:03:23.653610 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-08-29 18:03:23.653620 | orchestrator | Friday 29 August 2025 18:02:41 +0000 (0:00:01.792) 0:02:00.199 ********* 2025-08-29 
18:03:23.653634 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-08-29 18:03:23.653644 | orchestrator | 2025-08-29 18:03:23.653653 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-08-29 18:03:23.653663 | orchestrator | Friday 29 August 2025 18:02:51 +0000 (0:00:09.903) 0:02:10.102 ********* 2025-08-29 18:03:23.653672 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-08-29 18:03:23.653681 | orchestrator | 2025-08-29 18:03:23.653691 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-08-29 18:03:23.653700 | orchestrator | Friday 29 August 2025 18:03:08 +0000 (0:00:17.656) 0:02:27.758 ********* 2025-08-29 18:03:23.653710 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-08-29 18:03:23.653719 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-08-29 18:03:23.653729 | orchestrator | 2025-08-29 18:03:23.653738 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-08-29 18:03:23.653747 | orchestrator | Friday 29 August 2025 18:03:14 +0000 (0:00:05.279) 0:02:33.038 ********* 2025-08-29 18:03:23.653757 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.653766 | orchestrator | 2025-08-29 18:03:23.653776 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-08-29 18:03:23.653785 | orchestrator | Friday 29 August 2025 18:03:14 +0000 (0:00:00.291) 0:02:33.329 ********* 2025-08-29 18:03:23.653794 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.653804 | orchestrator | 2025-08-29 18:03:23.653813 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-08-29 18:03:23.653820 | orchestrator | Friday 29 August 2025 18:03:15 +0000 (0:00:01.005) 
0:02:34.334 ********* 2025-08-29 18:03:23.653833 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.653840 | orchestrator | 2025-08-29 18:03:23.653848 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-08-29 18:03:23.653856 | orchestrator | Friday 29 August 2025 18:03:15 +0000 (0:00:00.458) 0:02:34.793 ********* 2025-08-29 18:03:23.653864 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.653871 | orchestrator | 2025-08-29 18:03:23.653879 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-08-29 18:03:23.653887 | orchestrator | Friday 29 August 2025 18:03:17 +0000 (0:00:01.336) 0:02:36.129 ********* 2025-08-29 18:03:23.653894 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:03:23.653902 | orchestrator | 2025-08-29 18:03:23.653910 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 18:03:23.653918 | orchestrator | Friday 29 August 2025 18:03:20 +0000 (0:00:03.073) 0:02:39.202 ********* 2025-08-29 18:03:23.653925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:03:23.653933 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:03:23.653941 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:03:23.653949 | orchestrator | 2025-08-29 18:03:23.653962 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:03:23.653970 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-08-29 18:03:23.653978 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 18:03:23.653987 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-08-29 18:03:23.653994 | orchestrator | 2025-08-29 18:03:23.654002 | orchestrator | 2025-08-29 18:03:23.654010 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:03:23.654041 | orchestrator | Friday 29 August 2025 18:03:20 +0000 (0:00:00.818) 0:02:40.021 ********* 2025-08-29 18:03:23.654049 | orchestrator | =============================================================================== 2025-08-29 18:03:23.654057 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 22.44s 2025-08-29 18:03:23.654064 | orchestrator | service-ks-register : keystone | Creating services --------------------- 17.66s 2025-08-29 18:03:23.654072 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.47s 2025-08-29 18:03:23.654080 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.30s 2025-08-29 18:03:23.654087 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.01s 2025-08-29 18:03:23.654095 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.90s 2025-08-29 18:03:23.654103 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.20s 2025-08-29 18:03:23.654111 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.74s 2025-08-29 18:03:23.654118 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.00s 2025-08-29 18:03:23.654126 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.28s 2025-08-29 18:03:23.654133 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.56s 2025-08-29 18:03:23.654141 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.40s 2025-08-29 18:03:23.654149 | orchestrator | keystone : Creating default user role ----------------------------------- 3.07s 2025-08-29 18:03:23.654156 | orchestrator | 
keystone : Copying files for keystone-ssh ------------------------------- 2.69s 2025-08-29 18:03:23.654164 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.45s 2025-08-29 18:03:23.654172 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s 2025-08-29 18:03:23.654180 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.12s 2025-08-29 18:03:23.654196 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.09s 2025-08-29 18:03:23.654204 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.99s 2025-08-29 18:03:23.654212 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s 2025-08-29 18:03:23.654220 | orchestrator | 2025-08-29 18:03:23 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:23.654227 | orchestrator | 2025-08-29 18:03:23 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:23.654235 | orchestrator | 2025-08-29 18:03:23 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:23.654243 | orchestrator | 2025-08-29 18:03:23 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:03:23.654251 | orchestrator | 2025-08-29 18:03:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:26.673345 | orchestrator | 2025-08-29 18:03:26 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:26.673450 | orchestrator | 2025-08-29 18:03:26 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:26.674926 | orchestrator | 2025-08-29 18:03:26 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:26.675928 | orchestrator | 2025-08-29 18:03:26 | INFO  | Task 
810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:26.676364 | orchestrator | 2025-08-29 18:03:26 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:03:26.676601 | orchestrator | 2025-08-29 18:03:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:29.714872 | orchestrator | 2025-08-29 18:03:29 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:29.715109 | orchestrator | 2025-08-29 18:03:29 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:29.716024 | orchestrator | 2025-08-29 18:03:29 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:29.716912 | orchestrator | 2025-08-29 18:03:29 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:29.717819 | orchestrator | 2025-08-29 18:03:29 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state STARTED 2025-08-29 18:03:29.717929 | orchestrator | 2025-08-29 18:03:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:32.767313 | orchestrator | 2025-08-29 18:03:32 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:32.767376 | orchestrator | 2025-08-29 18:03:32 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:32.767955 | orchestrator | 2025-08-29 18:03:32 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:32.768866 | orchestrator | 2025-08-29 18:03:32 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:32.769753 | orchestrator | 2025-08-29 18:03:32 | INFO  | Task 18fa9d61-2a90-4319-9eab-f93659e4dfcd is in state SUCCESS 2025-08-29 18:03:32.769888 | orchestrator | 2025-08-29 18:03:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:35.810532 | orchestrator | 2025-08-29 18:03:35 | INFO  | Task 
d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:35.814242 | orchestrator | 2025-08-29 18:03:35 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:03:35.815398 | orchestrator | 2025-08-29 18:03:35 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:35.816537 | orchestrator | 2025-08-29 18:03:35 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:35.817706 | orchestrator | 2025-08-29 18:03:35 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:35.817732 | orchestrator | 2025-08-29 18:03:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:38.859451 | orchestrator | 2025-08-29 18:03:38 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:38.860593 | orchestrator | 2025-08-29 18:03:38 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:03:38.863013 | orchestrator | 2025-08-29 18:03:38 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:38.865765 | orchestrator | 2025-08-29 18:03:38 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:38.867143 | orchestrator | 2025-08-29 18:03:38 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:38.867391 | orchestrator | 2025-08-29 18:03:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:03:41.904586 | orchestrator | 2025-08-29 18:03:41 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:03:41.904702 | orchestrator | 2025-08-29 18:03:41 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:03:41.907926 | orchestrator | 2025-08-29 18:03:41 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:03:41.907997 | orchestrator | 2025-08-29 18:03:41 | INFO  | Task 
8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:03:57.110419 | orchestrator | 2025-08-29 18:03:57 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:03:57.110500 | orchestrator | 2025-08-29 18:03:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:04:00.141408 | orchestrator | 2025-08-29 18:04:00 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:04:00.141749 | orchestrator | 2025-08-29 18:04:00 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:04:00.142736 | orchestrator | 2025-08-29 18:04:00 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state STARTED 2025-08-29 18:04:00.143581 | orchestrator | 2025-08-29 18:04:00 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED 2025-08-29 18:04:00.144758 | orchestrator | 2025-08-29 18:04:00 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED 2025-08-29 18:04:00.144791 | orchestrator | 2025-08-29 18:04:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:04:03.173652 | orchestrator | 2025-08-29 18:04:03 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED 2025-08-29 18:04:03.173929 | orchestrator | 2025-08-29 18:04:03 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:04:03.174519 | orchestrator | 2025-08-29 18:04:03 | INFO  | Task be18e01e-0d43-47cd-80bd-17fc80d2c852 is in state SUCCESS 2025-08-29 18:04:03.174783 | orchestrator | 2025-08-29 18:04:03.174807 | orchestrator | 2025-08-29 18:04:03.174819 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:04:03.174831 | orchestrator | 2025-08-29 18:04:03.174842 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:04:03.174882 | orchestrator | Friday 29 August 2025 18:02:55 +0000 (0:00:00.320) 0:00:00.320 
********* 2025-08-29 18:04:03.174894 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:04:03.174906 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:04:03.174917 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:04:03.174928 | orchestrator | ok: [testbed-manager] 2025-08-29 18:04:03.174938 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:04:03.174949 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:04:03.174962 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:04:03.174981 | orchestrator | 2025-08-29 18:04:03.175000 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:04:03.175018 | orchestrator | Friday 29 August 2025 18:02:56 +0000 (0:00:01.008) 0:00:01.329 ********* 2025-08-29 18:04:03.175037 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175056 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175074 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175090 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175107 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175126 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175144 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-08-29 18:04:03.175162 | orchestrator | 2025-08-29 18:04:03.175181 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 18:04:03.175197 | orchestrator | 2025-08-29 18:04:03.175214 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-08-29 18:04:03.175232 | orchestrator | Friday 29 August 2025 18:02:57 +0000 (0:00:01.429) 0:00:02.758 ********* 2025-08-29 18:04:03.175251 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:04:03.175403 | orchestrator | 2025-08-29 18:04:03.175426 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-08-29 18:04:03.175440 | orchestrator | Friday 29 August 2025 18:03:00 +0000 (0:00:02.335) 0:00:05.093 ********* 2025-08-29 18:04:03.175454 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-08-29 18:04:03.175466 | orchestrator | 2025-08-29 18:04:03.175479 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-08-29 18:04:03.175491 | orchestrator | Friday 29 August 2025 18:03:08 +0000 (0:00:08.836) 0:00:13.929 ********* 2025-08-29 18:04:03.175506 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-08-29 18:04:03.175521 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-08-29 18:04:03.175534 | orchestrator | 2025-08-29 18:04:03.175547 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-08-29 18:04:03.175560 | orchestrator | Friday 29 August 2025 18:03:14 +0000 (0:00:05.800) 0:00:19.730 ********* 2025-08-29 18:04:03.175573 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 18:04:03.175585 | orchestrator | 2025-08-29 18:04:03.175597 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-08-29 18:04:03.175610 | orchestrator | Friday 29 August 2025 18:03:17 +0000 (0:00:03.229) 0:00:22.959 ********* 2025-08-29 18:04:03.175622 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 18:04:03.175635 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-08-29 18:04:03.175647 | orchestrator 
| 2025-08-29 18:04:03.175660 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-08-29 18:04:03.175688 | orchestrator | Friday 29 August 2025 18:03:21 +0000 (0:00:03.570) 0:00:26.529 ********* 2025-08-29 18:04:03.175701 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 18:04:03.175714 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-08-29 18:04:03.175740 | orchestrator | 2025-08-29 18:04:03.175753 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-08-29 18:04:03.175765 | orchestrator | Friday 29 August 2025 18:03:26 +0000 (0:00:05.312) 0:00:31.841 ********* 2025-08-29 18:04:03.175778 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-08-29 18:04:03.175790 | orchestrator | 2025-08-29 18:04:03.175801 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:04:03.175812 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175823 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175834 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175845 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175856 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175884 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175932 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:04:03.175944 | orchestrator | 2025-08-29 18:04:03.175955 | orchestrator | 2025-08-29 
18:04:03.175966 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:04:03.175977 | orchestrator | Friday 29 August 2025 18:03:31 +0000 (0:00:04.536) 0:00:36.378 *********
2025-08-29 18:04:03.175988 | orchestrator | ===============================================================================
2025-08-29 18:04:03.175999 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 8.84s
2025-08-29 18:04:03.176010 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.80s
2025-08-29 18:04:03.176020 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.31s
2025-08-29 18:04:03.176031 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.54s
2025-08-29 18:04:03.176042 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.57s
2025-08-29 18:04:03.176053 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.23s
2025-08-29 18:04:03.176063 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.34s
2025-08-29 18:04:03.176074 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.43s
2025-08-29 18:04:03.176085 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.01s
2025-08-29 18:04:03.176095 | orchestrator |
2025-08-29 18:04:03.176106 | orchestrator |
2025-08-29 18:04:03.176120 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2025-08-29 18:04:03.176139 | orchestrator |
2025-08-29 18:04:03.176158 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-08-29 18:04:03.176175 | orchestrator | Friday 29 August 2025 18:02:47 +0000 (0:00:00.318) 0:00:00.318 *********
2025-08-29 18:04:03.176234 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176257 | orchestrator |
2025-08-29 18:04:03.176304 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-08-29 18:04:03.176317 | orchestrator | Friday 29 August 2025 18:02:48 +0000 (0:00:01.810) 0:00:02.128 *********
2025-08-29 18:04:03.176327 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176338 | orchestrator |
2025-08-29 18:04:03.176349 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-08-29 18:04:03.176360 | orchestrator | Friday 29 August 2025 18:02:49 +0000 (0:00:01.122) 0:00:03.250 *********
2025-08-29 18:04:03.176381 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176391 | orchestrator |
2025-08-29 18:04:03.176402 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-08-29 18:04:03.176413 | orchestrator | Friday 29 August 2025 18:02:51 +0000 (0:00:01.205) 0:00:04.455 *********
2025-08-29 18:04:03.176423 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176434 | orchestrator |
2025-08-29 18:04:03.176445 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-08-29 18:04:03.176456 | orchestrator | Friday 29 August 2025 18:02:52 +0000 (0:00:01.793) 0:00:06.249 *********
2025-08-29 18:04:03.176467 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176477 | orchestrator |
2025-08-29 18:04:03.176488 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-08-29 18:04:03.176499 | orchestrator | Friday 29 August 2025 18:02:54 +0000 (0:00:01.254) 0:00:07.504 *********
2025-08-29 18:04:03.176510 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176520 | orchestrator |
2025-08-29 18:04:03.176531 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-08-29 18:04:03.176541 | orchestrator | Friday 29 August 2025 18:02:55 +0000 (0:00:01.120) 0:00:08.624 *********
2025-08-29 18:04:03.176552 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176563 | orchestrator |
2025-08-29 18:04:03.176574 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-08-29 18:04:03.176584 | orchestrator | Friday 29 August 2025 18:02:57 +0000 (0:00:02.086) 0:00:10.711 *********
2025-08-29 18:04:03.176603 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176614 | orchestrator |
2025-08-29 18:04:03.176624 | orchestrator | TASK [Create admin user] *******************************************************
2025-08-29 18:04:03.176635 | orchestrator | Friday 29 August 2025 18:02:58 +0000 (0:00:01.418) 0:00:12.130 *********
2025-08-29 18:04:03.176650 | orchestrator | changed: [testbed-manager]
2025-08-29 18:04:03.176668 | orchestrator |
2025-08-29 18:04:03.176686 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-08-29 18:04:03.176704 | orchestrator | Friday 29 August 2025 18:03:35 +0000 (0:00:36.903) 0:00:49.033 *********
2025-08-29 18:04:03.176722 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:04:03.176739 | orchestrator |
2025-08-29 18:04:03.176756 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-08-29 18:04:03.176772 | orchestrator |
2025-08-29 18:04:03.176790 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-08-29 18:04:03.176808 | orchestrator | Friday 29 August 2025 18:03:35 +0000 (0:00:00.178) 0:00:49.212 *********
2025-08-29 18:04:03.176827 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:04:03.176845 | orchestrator |
2025-08-29 18:04:03.176864 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-08-29 18:04:03.176882 | orchestrator |
2025-08-29 18:04:03.176900 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-08-29 18:04:03.176920 | orchestrator | Friday 29 August 2025 18:03:37 +0000 (0:00:01.514) 0:00:50.727 *********
2025-08-29 18:04:03.176939 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:04:03.176956 | orchestrator |
2025-08-29 18:04:03.176974 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-08-29 18:04:03.176986 | orchestrator |
2025-08-29 18:04:03.176996 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-08-29 18:04:03.177007 | orchestrator | Friday 29 August 2025 18:03:48 +0000 (0:00:11.312) 0:01:02.039 *********
2025-08-29 18:04:03.177018 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:04:03.177029 | orchestrator |
2025-08-29 18:04:03.177053 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:04:03.177064 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 18:04:03.177075 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:04:03.177096 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:04:03.177108 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:04:03.177118 | orchestrator |
2025-08-29 18:04:03.177129 | orchestrator |
2025-08-29 18:04:03.177140 | orchestrator |
2025-08-29 18:04:03.177150 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:04:03.177161 | orchestrator | Friday 29 August 2025 18:03:59 +0000 (0:00:11.131) 0:01:13.171 *********
2025-08-29 18:04:03.177172 | orchestrator | ===============================================================================
2025-08-29 18:04:03.177183 | orchestrator | Create admin user ------------------------------------------------------ 36.90s
2025-08-29 18:04:03.177193 | orchestrator | Restart ceph manager service ------------------------------------------- 23.96s
2025-08-29 18:04:03.177204 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s
2025-08-29 18:04:03.177215 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.81s
2025-08-29 18:04:03.177225 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.79s
2025-08-29 18:04:03.177236 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.42s
2025-08-29 18:04:03.177247 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.25s
2025-08-29 18:04:03.177257 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.21s
2025-08-29 18:04:03.177287 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s
2025-08-29 18:04:03.177299 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.12s
2025-08-29 18:04:03.177309 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s
2025-08-29 18:04:03.177443 | orchestrator | 2025-08-29 18:04:03 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED
2025-08-29 18:04:03.177458 | orchestrator | 2025-08-29 18:04:03 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:04:03.177469 | orchestrator | 2025-08-29 18:04:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:04:06.212571 | orchestrator | 2025-08-29 18:04:06 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
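The "Bootstrap ceph dashboard" play above is a sequence of mgr configuration changes: disable the dashboard module, set its config keys, re-enable it, then create the admin account from a password written to a temporary file. A minimal sketch of the corresponding ceph CLI calls, assuming the standard `ceph config set` / `ceph dashboard ac-user-create` commands (the playbook's exact invocation is not shown in the log):

```python
# Hypothetical command generator mirroring the dashboard-bootstrap tasks above.
# The command names are standard ceph mgr/dashboard commands; mapping the
# playbook tasks to exactly these calls is an assumption.
DASHBOARD_SETTINGS = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}

def dashboard_bootstrap_commands(password_file: str) -> list[str]:
    """Build the ordered command list: disable, configure, re-enable, create user."""
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}"
             for key, value in DASHBOARD_SETTINGS.items()]
    cmds.append("ceph mgr module enable dashboard")
    # 'ceph dashboard ac-user-create' reads the password from a file (-i),
    # which is why the playbook writes ceph_dashboard_password to a temp file
    # and removes it afterwards.
    cmds.append(f"ceph dashboard ac-user-create admin -i {password_file} administrator")
    return cmds

print("\n".join(dashboard_bootstrap_commands("/tmp/ceph_dashboard_password")))
```

The disable/enable pair around the `config set` calls matches the task order in the log; mgr/dashboard settings only take effect once the module is (re)started.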
orchestrator | 2025-08-29 18:04:06 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:04:06.217978 | orchestrator | 2025-08-29 18:04:06 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED
2025-08-29 18:04:06.219713 | orchestrator | 2025-08-29 18:04:06 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:04:06.219736 | orchestrator | 2025-08-29 18:04:06 | INFO  | Wait 1 second(s) until the next check
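The repeated "Task … is in state STARTED … Wait 1 second(s) until the next check" lines are produced by a poll loop that re-queries each task's state every second until it leaves STARTED. A minimal sketch of that loop, with the state lookup injected as a callable (`fetch_state` is a hypothetical helper; the real client queries the task API):

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, sleep=time.sleep):
    """Poll every `interval` seconds until no task reports STARTED.

    `fetch_state` maps a task id to a state string such as "STARTED" or
    "SUCCESS" (assumed states, matching the log output above).
    Returns the final state of every task.
    """
    while True:
        states = {tid: fetch_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        sleep(interval)
```

Injecting `sleep` keeps the loop testable without real delays; in the log the same four tasks are re-checked on every iteration until one flips to SUCCESS.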
cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:04.939109 | orchestrator | 2025-08-29 18:06:04 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED
2025-08-29 18:06:04.940177 | orchestrator | 2025-08-29 18:06:04 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:04.940203 | orchestrator | 2025-08-29 18:06:04 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:07.983518 | orchestrator | 2025-08-29 18:06:07 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
2025-08-29 18:06:07.985557 | orchestrator | 2025-08-29 18:06:07 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:07.985783 | orchestrator | 2025-08-29 18:06:07 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state STARTED
2025-08-29 18:06:07.986628 | orchestrator | 2025-08-29 18:06:07 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:07.986652 | orchestrator | 2025-08-29 18:06:07 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:11.036510 | orchestrator | 2025-08-29 18:06:11 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
2025-08-29 18:06:11.038688 | orchestrator | 2025-08-29 18:06:11 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:11.038752 | orchestrator | 2025-08-29 18:06:11 | INFO  | Task 8f8b70de-d243-4743-9e86-3364bf481a64 is in state SUCCESS
2025-08-29 18:06:11.038956 | orchestrator |
2025-08-29 18:06:11.040478 | orchestrator |
2025-08-29 18:06:11.040510 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:06:11.040523 | orchestrator |
2025-08-29 18:06:11.040560 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:06:11.040574 | orchestrator | Friday 29 August 2025 18:02:55 +0000 (0:00:00.347) 0:00:00.347 *********
2025-08-29 18:06:11.040585 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:06:11.040598 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:06:11.040610 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:06:11.040622 | orchestrator |
2025-08-29 18:06:11.040634 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:06:11.040647 | orchestrator | Friday 29 August 2025 18:02:55 +0000 (0:00:00.361) 0:00:00.708 *********
2025-08-29 18:06:11.040655 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-08-29 18:06:11.040663 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-08-29 18:06:11.040670 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-08-29 18:06:11.040677 | orchestrator |
2025-08-29 18:06:11.040685 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-08-29 18:06:11.040692 | orchestrator |
2025-08-29 18:06:11.040699 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 18:06:11.040707 | orchestrator | Friday 29 August 2025 18:02:56 +0000 (0:00:00.597) 0:00:01.306 *********
2025-08-29 18:06:11.040714 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:06:11.040722 | orchestrator |
2025-08-29 18:06:11.040729 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-08-29 18:06:11.040736 | orchestrator | Friday 29 August 2025 18:02:57 +0000 (0:00:01.061) 0:00:02.367 *********
2025-08-29 18:06:11.040755 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-08-29 18:06:11.040762 | orchestrator |
2025-08-29 18:06:11.040769 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-08-29 18:06:11.040776 | orchestrator | Friday 29 August 2025 18:03:07 +0000 (0:00:09.911) 0:00:12.279 *********
2025-08-29 18:06:11.040784 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-08-29 18:06:11.040792 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-08-29 18:06:11.040799 | orchestrator |
2025-08-29 18:06:11.040806 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-08-29 18:06:11.040813 | orchestrator | Friday 29 August 2025 18:03:12 +0000 (0:00:05.217) 0:00:17.496 *********
2025-08-29 18:06:11.040820 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-08-29 18:06:11.040827 | orchestrator |
2025-08-29 18:06:11.040834 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-08-29 18:06:11.040841 | orchestrator | Friday 29 August 2025 18:03:15 +0000 (0:00:03.338) 0:00:20.835 *********
2025-08-29 18:06:11.040849 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 18:06:11.040857 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-08-29 18:06:11.040864 | orchestrator |
2025-08-29 18:06:11.040871 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-08-29 18:06:11.040878 | orchestrator | Friday 29 August 2025 18:03:20 +0000 (0:00:04.079) 0:00:24.914 *********
2025-08-29 18:06:11.040886 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 18:06:11.040893 | orchestrator |
2025-08-29 18:06:11.040900 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-08-29 18:06:11.040907 | orchestrator | Friday 29 August 2025 18:03:23 +0000 (0:00:03.261) 0:00:28.175 *********
2025-08-29 18:06:11.040914 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-08-29 18:06:11.040921 | orchestrator |
2025-08-29
18:06:11.040928 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-08-29 18:06:11.040935 | orchestrator | Friday 29 August 2025 18:03:26 +0000 (0:00:03.429) 0:00:31.604 ********* 2025-08-29 18:06:11.040960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 
18:06:11.041019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041045 | orchestrator | 2025-08-29 18:06:11.041052 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 18:06:11.041060 | orchestrator | Friday 29 August 2025 18:03:31 +0000 (0:00:04.401) 0:00:36.006 ********* 2025-08-29 18:06:11.041072 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 
18:06:11.041108 | orchestrator |
2025-08-29 18:06:11.041116 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-08-29 18:06:11.041123 | orchestrator | Friday 29 August 2025 18:03:31 +0000 (0:00:00.752) 0:00:36.758 *********
2025-08-29 18:06:11.041131 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.041138 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:11.041146 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:11.041153 | orchestrator |
2025-08-29 18:06:11.041160 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-08-29 18:06:11.041167 | orchestrator | Friday 29 August 2025 18:03:36 +0000 (0:00:04.446) 0:00:41.205 *********
2025-08-29 18:06:11.041174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 18:06:11.041182 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 18:06:11.041189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 18:06:11.041196 | orchestrator |
2025-08-29 18:06:11.041203 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-08-29 18:06:11.041210 | orchestrator | Friday 29 August 2025 18:03:37 +0000 (0:00:01.548) 0:00:42.753 *********
2025-08-29 18:06:11.041217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 18:06:11.041225 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 18:06:11.041237 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 18:06:11.041244 | orchestrator |
2025-08-29 18:06:11.041251 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-08-29 18:06:11.041258 | orchestrator | Friday 29 August 2025 18:03:39 +0000 (0:00:01.242) 0:00:43.996 *********
2025-08-29 18:06:11.041265 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:06:11.041272 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:06:11.041279 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:06:11.041324 | orchestrator |
2025-08-29 18:06:11.041332 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-08-29 18:06:11.041339 | orchestrator | Friday 29 August 2025 18:03:40 +0000 (0:00:00.905) 0:00:44.902 *********
2025-08-29 18:06:11.041346 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.041353 | orchestrator |
2025-08-29 18:06:11.041360 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-08-29 18:06:11.041373 | orchestrator | Friday 29 August 2025 18:03:40 +0000 (0:00:00.149) 0:00:45.052 *********
2025-08-29 18:06:11.041381 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.041388 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:11.041395 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:11.041402 | orchestrator |
2025-08-29 18:06:11.041410 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 18:06:11.041417 | orchestrator | Friday 29 August 2025 18:03:40 +0000 (0:00:00.351) 0:00:45.403 *********
2025-08-29 18:06:11.041424 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:06:11.041431 | orchestrator |
2025-08-29 18:06:11.041438 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-08-29 18:06:11.041445 | orchestrator | Friday 29 August 2025 18:03:41 +0000 (0:00:00.533) 0:00:45.937
********* 2025-08-29 18:06:11.041459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041495 | orchestrator | 2025-08-29 18:06:11.041502 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-08-29 18:06:11.041509 | orchestrator | Friday 29 August 2025 18:03:46 +0000 (0:00:05.048) 0:00:50.985 ********* 2025-08-29 18:06:11.041527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 18:06:11.041541 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:11.041549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 18:06:11.041557 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:11.041572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 18:06:11.041580 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:11.041587 | orchestrator | 2025-08-29 18:06:11.041594 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 18:06:11.041602 | orchestrator | Friday 29 August 2025 18:03:50 +0000 (0:00:04.804) 0:00:55.790 ********* 2025-08-29 18:06:11.041613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 18:06:11.041625 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:11.041638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 18:06:11.041646 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:11.041657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 18:06:11.041676 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:11.041683 | orchestrator | 2025-08-29 18:06:11.041691 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-08-29 18:06:11.041698 | orchestrator | Friday 29 August 2025 18:03:55 +0000 (0:00:04.358) 0:01:00.149 ********* 2025-08-29 18:06:11.041705 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:11.041713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:11.041720 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:11.041727 | orchestrator | 2025-08-29 18:06:11.041734 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-08-29 18:06:11.041741 | orchestrator | Friday 29 August 2025 18:04:01 +0000 (0:00:05.904) 0:01:06.053 ********* 2025-08-29 18:06:11.041754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.041788 | orchestrator | 2025-08-29 18:06:11.041796 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 18:06:11.041803 | orchestrator | Friday 29 August 2025 18:04:06 +0000 (0:00:04.907) 0:01:10.960 ********* 2025-08-29 18:06:11.041810 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:06:11.041821 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:06:11.041832 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:06:11.041843 | orchestrator | 2025-08-29 18:06:11.041854 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 18:06:11.042070 | orchestrator | Friday 29 August 2025 18:04:14 +0000 (0:00:08.667) 0:01:19.628 ********* 2025-08-29 18:06:11.042089 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:11.042097 | orchestrator | 
skipping: [testbed-node-1]
2025-08-29 18:06:11.042104 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.042119 | orchestrator |
2025-08-29 18:06:11.042127 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-08-29 18:06:11.042134 | orchestrator | Friday 29 August 2025 18:04:18 +0000 (0:00:04.095) 0:01:23.723 *********
2025-08-29 18:06:11.042141 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:11.042149 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:11.042156 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.042163 | orchestrator |
2025-08-29 18:06:11.042170 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-08-29 18:06:11.042177 | orchestrator | Friday 29 August 2025 18:04:25 +0000 (0:00:06.658) 0:01:30.382 *********
2025-08-29 18:06:11.042185 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:11.042192 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.042199 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:11.042206 | orchestrator |
2025-08-29 18:06:11.042213 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-08-29 18:06:11.042221 | orchestrator | Friday 29 August 2025 18:04:30 +0000 (0:00:05.128) 0:01:35.510 *********
2025-08-29 18:06:11.042228 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:11.042235 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.042242 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:11.042249 | orchestrator |
2025-08-29 18:06:11.042256 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-08-29 18:06:11.042263 | orchestrator | Friday 29 August 2025 18:04:36 +0000 (0:00:06.138) 0:01:41.649 *********
2025-08-29 18:06:11.042270 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.042277 | orchestrator |
skipping: [testbed-node-1] 2025-08-29 18:06:11.042315 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:11.042323 | orchestrator | 2025-08-29 18:06:11.042330 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 18:06:11.042337 | orchestrator | Friday 29 August 2025 18:04:37 +0000 (0:00:00.845) 0:01:42.495 ********* 2025-08-29 18:06:11.042345 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 18:06:11.042352 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:11.042359 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 18:06:11.042366 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:11.042374 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 18:06:11.042381 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:11.042388 | orchestrator | 2025-08-29 18:06:11.042395 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 18:06:11.042402 | orchestrator | Friday 29 August 2025 18:04:44 +0000 (0:00:06.728) 0:01:49.223 ********* 2025-08-29 18:06:11.042411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.042437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 18:06:11.042446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-08-29 18:06:11.042454 | orchestrator |
2025-08-29 18:06:11.042466 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 18:06:11.042473 | orchestrator | Friday 29 August 2025 18:04:50 +0000 (0:00:06.050) 0:01:55.274 *********
2025-08-29 18:06:11.042480 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:11.042488 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:11.042495 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:11.042502 | orchestrator |
2025-08-29 18:06:11.042509 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-08-29 18:06:11.042516 | orchestrator | Friday 29 August 2025 18:04:50 +0000 (0:00:00.477) 0:01:55.752 *********
2025-08-29 18:06:11.042523 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.042530 | orchestrator |
2025-08-29 18:06:11.042538 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-08-29 18:06:11.042545 | orchestrator | Friday 29 August 2025 18:04:53 +0000 (0:00:02.251) 0:01:58.003 *********
2025-08-29 18:06:11.042552 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.042559 | orchestrator |
2025-08-29 18:06:11.042567 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-08-29 18:06:11.042574 | orchestrator | Friday 29 August 2025 18:04:55 +0000 (0:00:02.093) 0:02:00.097 *********
2025-08-29 18:06:11.042581 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.042588 | orchestrator |
2025-08-29 18:06:11.042595 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-08-29 18:06:11.042606 | orchestrator | Friday 29 August 2025 18:04:57 +0000 (0:00:02.036) 0:02:02.134 *********
2025-08-29 18:06:11.042613 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.042621 | orchestrator |
2025-08-29 18:06:11.042628 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-08-29 18:06:11.042635 | orchestrator | Friday 29 August 2025 18:05:27 +0000 (0:00:29.824) 0:02:31.959 *********
2025-08-29 18:06:11.042642 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.042649 | orchestrator |
2025-08-29 18:06:11.042657 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 18:06:11.042664 | orchestrator | Friday 29 August 2025 18:05:29 +0000 (0:00:02.270) 0:02:34.229 *********
2025-08-29 18:06:11.042673 | orchestrator |
2025-08-29 18:06:11.042682 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 18:06:11.042690 | orchestrator | Friday 29 August 2025 18:05:29 +0000 (0:00:00.612) 0:02:34.842 *********
2025-08-29 18:06:11.042698 | orchestrator |
2025-08-29 18:06:11.042707 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-08-29 18:06:11.042715 | orchestrator | Friday 29 August 2025 18:05:30 +0000 (0:00:00.239) 0:02:35.081 *********
2025-08-29 18:06:11.042723 | orchestrator |
2025-08-29 18:06:11.042731 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-08-29 18:06:11.042740 | orchestrator | Friday 29 August 2025 18:05:30 +0000 (0:00:00.195) 0:02:35.276 *********
2025-08-29 18:06:11.042748 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:11.042756 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:11.042765 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:11.042773 | orchestrator |
2025-08-29 18:06:11.042781 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:06:11.042794 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 18:06:11.042804 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 18:06:11.042813 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 18:06:11.042821 | orchestrator |
2025-08-29 18:06:11.042829 | orchestrator |
2025-08-29 18:06:11.042837 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:06:11.042853 | orchestrator | Friday 29 August 2025 18:06:08 +0000 (0:00:37.959) 0:03:13.236 *********
2025-08-29 18:06:11.042862 | orchestrator | ===============================================================================
2025-08-29 18:06:11.042871 | orchestrator | glance : Restart glance-api container ---------------------------------- 37.96s
2025-08-29 18:06:11.042879 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.82s
2025-08-29 18:06:11.042887 | orchestrator | service-ks-register : glance | Creating services ------------------------ 9.91s
2025-08-29 18:06:11.042895 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.67s
2025-08-29 18:06:11.042904 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.73s
2025-08-29 18:06:11.042912 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.66s
2025-08-29 18:06:11.042920 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.14s
2025-08-29 18:06:11.042929 | orchestrator | glance : Check glance containers ---------------------------------------- 6.05s
2025-08-29 18:06:11.042937 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.90s
2025-08-29 18:06:11.042945 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.22s
2025-08-29 18:06:11.042953 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.13s
2025-08-29 18:06:11.042962 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.05s
2025-08-29 18:06:11.042970 | orchestrator | glance : Copying over config.json files for services -------------------- 4.91s
2025-08-29 18:06:11.042979 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.80s
2025-08-29 18:06:11.042987 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.45s
2025-08-29 18:06:11.042996 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.40s
2025-08-29 18:06:11.043004 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.36s
2025-08-29 18:06:11.043013 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.10s
2025-08-29 18:06:11.043021 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.08s
2025-08-29 18:06:11.043028 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.43s
2025-08-29 18:06:11.043597 | orchestrator | 2025-08-29 18:06:11 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:11.045130 | orchestrator | 2025-08-29 18:06:11 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:06:11.045244 | orchestrator | 2025-08-29 18:06:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:14.108981 | orchestrator | 2025-08-29 18:06:14 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
2025-08-29 18:06:14.109437 | orchestrator | 2025-08-29 18:06:14 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:14.111068 | orchestrator | 2025-08-29 18:06:14 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:14.112408 | orchestrator | 2025-08-29 18:06:14 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:06:14.112429 | orchestrator | 2025-08-29 18:06:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:17.176153 | orchestrator | 2025-08-29 18:06:17 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
2025-08-29 18:06:17.176949 | orchestrator | 2025-08-29 18:06:17 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:17.178504 | orchestrator | 2025-08-29 18:06:17 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:17.179066 | orchestrator | 2025-08-29 18:06:17 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:06:17.179116 | orchestrator | 2025-08-29 18:06:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:20.225082 | orchestrator | 2025-08-29 18:06:20 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
2025-08-29 18:06:20.228984 | orchestrator | 2025-08-29 18:06:20 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:20.230938 | orchestrator | 2025-08-29 18:06:20 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:20.234986 | orchestrator | 2025-08-29 18:06:20 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:06:20.235662 | orchestrator | 2025-08-29 18:06:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:23.272004 | orchestrator | 2025-08-29 18:06:23 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state STARTED
2025-08-29 18:06:23.279660 | orchestrator | 2025-08-29 18:06:23 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:23.284948 | orchestrator | 2025-08-29 18:06:23 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:23.286275 | orchestrator | 2025-08-29 18:06:23 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:06:23.286334 | orchestrator | 2025-08-29 18:06:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:06:26.342207 | orchestrator | 2025-08-29 18:06:26 | INFO  | Task d64e9a55-0a5e-46db-960e-9882af61607e is in state SUCCESS
2025-08-29 18:06:26.344173 | orchestrator |
2025-08-29 18:06:26.344249 | orchestrator |
2025-08-29 18:06:26.344368 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:06:26.344384 | orchestrator |
2025-08-29 18:06:26.344396 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:06:26.344407 | orchestrator | Friday 29 August 2025 18:02:46 +0000 (0:00:00.321) 0:00:00.321 *********
2025-08-29 18:06:26.344418 | orchestrator | ok: [testbed-manager]
2025-08-29 18:06:26.344430 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:06:26.344441 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:06:26.344498 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:06:26.344510 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:06:26.344520 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:06:26.344530 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:06:26.344541 | orchestrator |
2025-08-29 18:06:26.344552 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-08-29 18:06:26.344562 | orchestrator | Friday 29 August 2025 18:02:48 +0000 (0:00:01.148) 0:00:01.470 ********* 2025-08-29 18:06:26.344589 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344600 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344611 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344632 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344643 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344653 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344733 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-08-29 18:06:26.344746 | orchestrator | 2025-08-29 18:06:26.344757 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-08-29 18:06:26.344767 | orchestrator | 2025-08-29 18:06:26.344781 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 18:06:26.344817 | orchestrator | Friday 29 August 2025 18:02:48 +0000 (0:00:00.743) 0:00:02.213 ********* 2025-08-29 18:06:26.344833 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:06:26.344875 | orchestrator | 2025-08-29 18:06:26.344888 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-08-29 18:06:26.344900 | orchestrator | Friday 29 August 2025 18:02:50 +0000 (0:00:01.804) 0:00:04.017 ********* 2025-08-29 18:06:26.344916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.344933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.344961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.344977 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 18:06:26.345006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345176 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345249 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 18:06:26.345270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345437 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345467 | orchestrator | 2025-08-29 18:06:26.345478 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 18:06:26.345489 | orchestrator | Friday 29 August 2025 18:02:54 +0000 (0:00:03.799) 0:00:07.817 ********* 2025-08-29 18:06:26.345499 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:06:26.345510 | orchestrator | 2025-08-29 18:06:26.345594 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 18:06:26.345605 | orchestrator | Friday 29 August 2025 18:02:56 +0000 (0:00:01.722) 0:00:09.539 ********* 2025-08-29 18:06:26.345697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345712 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345799 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 18:06:26.345818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345917 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.345928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.345961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.345989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.346001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.346075 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.346091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2025-08-29 18:06:26.346115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.346126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.346148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.346166 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 18:06:26.347839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.347936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-08-29 18:06:26.347954 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.347966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.347979 | orchestrator | 2025-08-29 18:06:26.347992 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-08-29 18:06:26.348004 | orchestrator | Friday 29 August 2025 18:03:02 +0000 (0:00:06.343) 0:00:15.883 ********* 2025-08-29 18:06:26.348017 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 18:06:26.348047 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 18:06:26.348142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348200 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:06:26.348220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348364 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348516 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:26.348529 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:26.348541 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:26.348563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348644 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:06:26.348674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:06:26.348747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348758 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348780 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:06:26.348791 | orchestrator | 2025-08-29 18:06:26.348803 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-08-29 18:06:26.348814 | orchestrator | Friday 29 August 2025 18:03:04 +0000 (0:00:01.564) 0:00:17.447 ********* 2025-08-29 18:06:26.348825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-08-29 18:06:26.348847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 18:06:26.348882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348893 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.348904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348916 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.348927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.348951 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 18:06:26.348963 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:26.348981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 
18:06:26.348993 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349026 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:06:26.349037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349079 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:26.349105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.349217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 18:06:26.349372 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:26.349392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.349432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 18:06:26.349500 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:06:26.349529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349568 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:06:26.349588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2025-08-29 18:06:26.349608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 18:06:26.349658 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:06:26.349677 | orchestrator | 2025-08-29 18:06:26.349696 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 18:06:26.349715 | orchestrator | Friday 29 August 2025 18:03:06 +0000 (0:00:02.040) 0:00:19.488 ********* 2025-08-29 18:06:26.349735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349791 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 18:06:26.349811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.349882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349898 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.349914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.349941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.349961 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.349980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350177 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350217 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}}) 2025-08-29 18:06:26.350236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.350248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.350324 | orchestrator | 2025-08-29 18:06:26.350335 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-08-29 18:06:26.350346 | orchestrator | Friday 29 August 2025 18:03:11 +0000 (0:00:05.634) 0:00:25.123 ********* 2025-08-29 18:06:26.350357 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 18:06:26.350368 | orchestrator | 2025-08-29 18:06:26.350379 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-08-29 18:06:26.350390 | orchestrator | Friday 29 August 2025 18:03:12 +0000 (0:00:01.270) 0:00:26.393 ********* 2025-08-29 18:06:26.350401 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350418 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350437 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350456 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-08-29 18:06:26.350467 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350478 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350500 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096663, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9730825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350516 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350533 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350551 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
55956, 'inode': 1096655, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9722962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350563 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096655, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9722962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350574 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350585 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350596 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096688, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9765425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350617 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096655, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9722962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350636 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096655, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9722962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-08-29 18:06:26.350655 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096688, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9765425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350666 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096655, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9722962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350677 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350688 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096688, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9765425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350699 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096688, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9765425, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350715 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096647, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9700253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350858 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 
'inode': 1096697, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9785707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.350875 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096665, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.973391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350886 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096647, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9700253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350897 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096688, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9765425, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350909 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096655, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9722962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350919 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096647, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9700253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.350936 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096647, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9700253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-08-29 18:06:26.350963 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 18:06:26.350976 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 18:06:26.350987 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 18:06:26.350998 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 18:06:26.351009 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 18:06:26.351020 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 18:06:26.351036 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 18:06:26.351059 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 18:06:26.351071 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 18:06:26.351082 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 18:06:26.351093 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 18:06:26.351104 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 18:06:26.351115 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 18:06:26.351131 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, size=55956)
2025-08-29 18:06:26.351154 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 18:06:26.351166 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 18:06:26.351177 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 18:06:26.351188 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 18:06:26.351199 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 18:06:26.351210 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 18:06:26.351233 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 18:06:26.351249 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 18:06:26.351261 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 18:06:26.351272 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 18:06:26.351310 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 18:06:26.351322 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 18:06:26.351333 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 18:06:26.351356 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 18:06:26.351373 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2025-08-29 18:06:26.351385 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, size=12293)
2025-08-29 18:06:26.351396 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 18:06:26.351407 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, size=13522)
2025-08-29 18:06:26.351418 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 18:06:26.351429 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 18:06:26.351454 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 18:06:26.351466 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 18:06:26.351484 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2025-08-29 18:06:26.351497 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, size=5593)
2025-08-29 18:06:26.351510 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 18:06:26.351523 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2025-08-29 18:06:26.351542 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 18:06:26.351559 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2025-08-29 18:06:26.351572 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 18:06:26.351591 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2025-08-29 18:06:26.351605 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, size=3900)
2025-08-29 18:06:26.351618 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, size=5051)
2025-08-29 18:06:26.351630 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 18:06:26.351649 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, size=7408)
2025-08-29 18:06:26.351667 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, size=5987)
2025-08-29 18:06:26.351680 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules, size=5051)
2025-08-29 18:06:26.351699 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2025-08-29 18:06:26.351712 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules, size=2309)
2025-08-29 18:06:26.351725 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, size=3)
2025-08-29 18:06:26.351737 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
2025-08-29 18:06:26.351757 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules, size=2309)
2025-08-29 18:06:26.351775 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, size=3)
2025-08-29 18:06:26.351788 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules, size=5051)
2025-08-29 18:06:26.351807 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, size=334)
2025-08-29 18:06:26.351821 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules, size=3792)
2025-08-29 18:06:26.351835 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules, size=3792)
2025-08-29 18:06:26.351848 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules, size=5051)
2025-08-29 18:06:26.351865 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules, size=3539)
2025-08-29 18:06:26.351882 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules, size=2309)
2025-08-29 18:06:26.351893 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.351905 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, size=7933)
2025-08-29 18:06:26.351922 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, size=3)
'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.351933 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096692, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9774601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.351945 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096716, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9818578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.351963 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:26.351974 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096682, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9758096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.351986 | 
orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096679, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9753697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352029 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096718, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.98203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352041 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096679, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9753697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352058 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096652, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.970717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352070 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096716, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9818578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352081 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:06:26.352092 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096692, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9774601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352111 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 3539, 'inode': 1096716, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9818578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352122 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:06:26.352133 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096646, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9695928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352149 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096652, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.970717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352161 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096685, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1756487906.9763165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352177 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096646, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9695928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352189 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096682, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9758096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352211 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096682, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9758096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352229 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096679, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9753697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352240 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096679, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9753697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352256 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096670, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9749155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352267 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096716, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9818578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352279 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:26.352318 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096716, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9818578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 18:06:26.352330 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:06:26.352341 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096659, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9727936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352364 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096695, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9780626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352375 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096640, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9688976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352386 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096718, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.98203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352402 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096692, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9774601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352414 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096652, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.970717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352431 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096646, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9695928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096682, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1756487906.9758096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352462 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096679, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9753697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352474 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096716, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9818578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 18:06:26.352485 | orchestrator | 2025-08-29 18:06:26.352496 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-08-29 18:06:26.352507 | orchestrator | Friday 29 August 2025 18:03:42 +0000 (0:00:29.983) 0:00:56.376 ********* 2025-08-29 18:06:26.352518 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 18:06:26.352529 | orchestrator | 2025-08-29 18:06:26.352540 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-08-29 
18:06:26.352550 | orchestrator | Friday 29 August 2025 18:03:43 +0000 (0:00:00.760) 0:00:57.137 *********
2025-08-29 18:06:26.352561 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352583 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352604 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352615 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352626 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352637 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352652 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352663 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352674 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352695 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352706 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352717 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352727 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352738 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352749 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352760 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352778 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352789 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352815 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352837 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352848 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352859 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352869 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352880 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352890 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352901 | orchestrator | [WARNING]: Skipped
2025-08-29 18:06:26.352912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352922 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-08-29 18:06:26.352933 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 18:06:26.352944 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-08-29 18:06:26.352954 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 18:06:26.352965 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 18:06:26.352976 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 18:06:26.352987 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 18:06:26.352998 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 18:06:26.353008 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 18:06:26.353019 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 18:06:26.353029 | orchestrator |
2025-08-29 18:06:26.353040 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-08-29 18:06:26.353051 | orchestrator | Friday 29 August 2025 18:03:46 +0000 (0:00:02.529) 0:00:59.667 *********
2025-08-29 18:06:26.353062 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353072 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:26.353083 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353094 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:26.353104 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353115 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:06:26.353126 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353136 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.353147 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353158 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:06:26.353168 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353179 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:06:26.353190 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 18:06:26.353201 | orchestrator |
2025-08-29 18:06:26.353211 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-08-29 18:06:26.353222 | orchestrator | Friday 29 August 2025 18:04:09 +0000 (0:00:22.768) 0:01:22.435 *********
2025-08-29 18:06:26.353233 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353243 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:26.353254 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353271 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:26.353282 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353351 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:06:26.353362 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353373 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.353384 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353394 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:06:26.353411 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353422 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:06:26.353432 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 18:06:26.353443 | orchestrator |
2025-08-29 18:06:26.353454 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-08-29 18:06:26.353463 | orchestrator | Friday 29 August 2025 18:04:13 +0000 (0:00:04.135) 0:01:26.570 *********
2025-08-29 18:06:26.353473 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353483 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:26.353493 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353503 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353564 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353577 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:06:26.353586 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:26.353596 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.353605 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353615 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353624 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:06:26.353634 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 18:06:26.353643 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:06:26.353652 | orchestrator |
2025-08-29 18:06:26.353662 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-08-29 18:06:26.353671 | orchestrator | Friday 29 August 2025 18:04:15 +0000 (0:00:02.110) 0:01:28.681 *********
2025-08-29 18:06:26.353680 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 18:06:26.353690 | orchestrator |
2025-08-29 18:06:26.353699 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-08-29 18:06:26.353709 | orchestrator | Friday 29 August 2025 18:04:16 +0000 (0:00:00.793) 0:01:29.474 *********
2025-08-29 18:06:26.353718 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:06:26.353727 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:26.353737 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:26.353746 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.353755 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:06:26.353765 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:06:26.353774 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:06:26.353784 | orchestrator |
2025-08-29 18:06:26.353793 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-08-29 18:06:26.353809 | orchestrator | Friday 29 August 2025 18:04:16 +0000 (0:00:00.610) 0:01:30.085 *********
2025-08-29 18:06:26.353819 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:06:26.353828 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:06:26.353838 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:06:26.353847 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:06:26.353857 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:26.353866 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:26.353875 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:26.353885 | orchestrator |
2025-08-29 18:06:26.353894 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-08-29 18:06:26.353905 | orchestrator | Friday 29 August 2025 18:04:19 +0000 (0:00:02.999) 0:01:33.084 *********
2025-08-29 18:06:26.353921 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.353938 | orchestrator | skipping: [testbed-manager]
2025-08-29 18:06:26.353953 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.353968 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.353984 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.354000 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.354065 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:26.354080 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:26.354090 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.354099 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:06:26.354109 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.354119 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:06:26.354128 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 18:06:26.354138 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:06:26.354147 | orchestrator |
2025-08-29 18:06:26.354157 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-08-29 18:06:26.354167 | orchestrator | Friday 29 August 2025 18:04:23 +0000 (0:00:03.395) 0:01:36.480 *********
2025-08-29 18:06:26.354176 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-08-29 18:06:26.354186 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:06:26.354202 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-08-29 18:06:26.354212 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:06:26.354221 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-08-29 18:06:26.354231 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:06:26.354241
| orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 18:06:26.354250 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:06:26.354260 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 18:06:26.354269 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:06:26.354279 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 18:06:26.354312 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:06:26.354327 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 18:06:26.354341 | orchestrator | 2025-08-29 18:06:26.354359 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 18:06:26.354378 | orchestrator | Friday 29 August 2025 18:04:25 +0000 (0:00:02.427) 0:01:38.908 ********* 2025-08-29 18:06:26.354388 | orchestrator | [WARNING]: Skipped 2025-08-29 18:06:26.354397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 18:06:26.354407 | orchestrator | due to this access issue: 2025-08-29 18:06:26.354416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 18:06:26.354426 | orchestrator | not a directory 2025-08-29 18:06:26.354435 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 18:06:26.354445 | orchestrator | 2025-08-29 18:06:26.354455 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 18:06:26.354464 | orchestrator | Friday 29 August 2025 18:04:27 +0000 (0:00:01.675) 0:01:40.584 ********* 2025-08-29 18:06:26.354474 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:06:26.354548 | 
orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:26.354559 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:26.354568 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:26.354578 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:06:26.354587 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:06:26.354597 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:06:26.354606 | orchestrator | 2025-08-29 18:06:26.354616 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-08-29 18:06:26.354626 | orchestrator | Friday 29 August 2025 18:04:28 +0000 (0:00:01.099) 0:01:41.683 ********* 2025-08-29 18:06:26.354635 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:06:26.354645 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:06:26.354654 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:06:26.354664 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:06:26.354673 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:06:26.354683 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:06:26.354692 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:06:26.354702 | orchestrator | 2025-08-29 18:06:26.354711 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-08-29 18:06:26.354721 | orchestrator | Friday 29 August 2025 18:04:29 +0000 (0:00:01.281) 0:01:42.964 ********* 2025-08-29 18:06:26.354732 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 18:06:26.354748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.354773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.354801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 
18:06:26.354829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.354849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.354861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.354871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.354881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.354892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.354908 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 18:06:26.354926 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.354942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.354952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.354963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.354973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.354983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.354993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.355018 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.355033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.355044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.355054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.355064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.355074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 18:06:26.355091 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 18:06:26.355119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.355152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.355174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.355192 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 18:06:26.355209 | orchestrator | 2025-08-29 18:06:26.355226 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-08-29 18:06:26.355243 | orchestrator | Friday 29 August 2025 18:04:33 +0000 (0:00:04.130) 0:01:47.094 ********* 2025-08-29 18:06:26.355258 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 18:06:26.355281 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:06:26.355325 | orchestrator | 2025-08-29 18:06:26.355342 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 18:06:26.355359 | orchestrator | Friday 29 August 2025 18:04:36 +0000 (0:00:02.936) 0:01:50.031 ********* 2025-08-29 18:06:26.355374 | orchestrator | 2025-08-29 18:06:26.355391 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 18:06:26.355408 | orchestrator | Friday 29 August 2025 18:04:36 +0000 (0:00:00.205) 0:01:50.236 ********* 2025-08-29 18:06:26.355424 | orchestrator | 2025-08-29 18:06:26.355440 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-08-29 18:06:26.355457 | orchestrator | Friday 29 August 
2025 18:04:37 +0000 (0:00:00.187)       0:01:50.427 *********
2025-08-29 18:06:26.355473 | orchestrator |
2025-08-29 18:06:26.355501 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 18:06:26.355518 | orchestrator | Friday 29 August 2025 18:04:37 +0000 (0:00:00.313)       0:01:50.741 *********
2025-08-29 18:06:26.355534 | orchestrator |
2025-08-29 18:06:26.355550 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 18:06:26.355566 | orchestrator | Friday 29 August 2025 18:04:37 +0000 (0:00:00.142)       0:01:50.884 *********
2025-08-29 18:06:26.355582 | orchestrator |
2025-08-29 18:06:26.355598 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 18:06:26.355614 | orchestrator | Friday 29 August 2025 18:04:37 +0000 (0:00:00.138)       0:01:51.022 *********
2025-08-29 18:06:26.355631 | orchestrator |
2025-08-29 18:06:26.355647 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-08-29 18:06:26.355664 | orchestrator | Friday 29 August 2025 18:04:37 +0000 (0:00:00.134)       0:01:51.156 *********
2025-08-29 18:06:26.355680 | orchestrator |
2025-08-29 18:06:26.355696 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-08-29 18:06:26.355712 | orchestrator | Friday 29 August 2025 18:04:37 +0000 (0:00:00.178)       0:01:51.335 *********
2025-08-29 18:06:26.355729 | orchestrator | changed: [testbed-manager]
2025-08-29 18:06:26.355745 | orchestrator |
2025-08-29 18:06:26.355761 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-08-29 18:06:26.355778 | orchestrator | Friday 29 August 2025 18:04:59 +0000 (0:00:21.698)       0:02:13.033 *********
2025-08-29 18:06:26.355794 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:26.355810 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:06:26.355827 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:06:26.355843 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:26.355859 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:06:26.355882 | orchestrator | changed: [testbed-manager]
2025-08-29 18:06:26.355899 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:26.355915 | orchestrator |
2025-08-29 18:06:26.355931 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-08-29 18:06:26.355948 | orchestrator | Friday 29 August 2025 18:05:14 +0000 (0:00:14.536)       0:02:27.569 *********
2025-08-29 18:06:26.355964 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:26.355981 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:26.355998 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:26.356015 | orchestrator |
2025-08-29 18:06:26.356031 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-08-29 18:06:26.356048 | orchestrator | Friday 29 August 2025 18:05:19 +0000 (0:00:05.469)       0:02:33.039 *********
2025-08-29 18:06:26.356065 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:26.356081 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:26.356097 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:26.356113 | orchestrator |
2025-08-29 18:06:26.356129 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-08-29 18:06:26.356145 | orchestrator | Friday 29 August 2025 18:05:27 +0000 (0:00:07.446)       0:02:40.485 *********
2025-08-29 18:06:26.356162 | orchestrator | changed: [testbed-manager]
2025-08-29 18:06:26.356179 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:26.356199 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:06:26.356209 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:26.356219 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:06:26.356228 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:06:26.356238 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:26.356247 | orchestrator |
2025-08-29 18:06:26.356257 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-08-29 18:06:26.356267 | orchestrator | Friday 29 August 2025 18:05:43 +0000 (0:00:16.299)       0:02:56.785 *********
2025-08-29 18:06:26.356276 | orchestrator | changed: [testbed-manager]
2025-08-29 18:06:26.356346 | orchestrator |
2025-08-29 18:06:26.356358 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-08-29 18:06:26.356368 | orchestrator | Friday 29 August 2025 18:05:55 +0000 (0:00:11.952)       0:03:08.738 *********
2025-08-29 18:06:26.356386 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:06:26.356396 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:06:26.356406 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:06:26.356415 | orchestrator |
2025-08-29 18:06:26.356424 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-08-29 18:06:26.356434 | orchestrator | Friday 29 August 2025 18:06:05 +0000 (0:00:09.714)       0:03:18.452 *********
2025-08-29 18:06:26.356443 | orchestrator | changed: [testbed-manager]
2025-08-29 18:06:26.356453 | orchestrator |
2025-08-29 18:06:26.356462 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-08-29 18:06:26.356472 | orchestrator | Friday 29 August 2025 18:06:12 +0000 (0:00:07.384)       0:03:25.836 *********
2025-08-29 18:06:26.356481 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:06:26.356491 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:06:26.356500 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:06:26.356510 | orchestrator |
2025-08-29 18:06:26.356519 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:06:26.356529 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 18:06:26.356540 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 18:06:26.356549 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 18:06:26.356559 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 18:06:26.356569 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 18:06:26.356578 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 18:06:26.356588 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-08-29 18:06:26.356597 | orchestrator |
2025-08-29 18:06:26.356607 | orchestrator |
2025-08-29 18:06:26.356616 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:06:26.356626 | orchestrator | Friday 29 August 2025 18:06:23 +0000 (0:00:11.240)       0:03:37.077 *********
2025-08-29 18:06:26.356634 | orchestrator | ===============================================================================
2025-08-29 18:06:26.356642 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 29.98s
2025-08-29 18:06:26.356649 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.77s
2025-08-29 18:06:26.356657 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.70s
2025-08-29 18:06:26.356665 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.30s
2025-08-29 18:06:26.356673 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.54s
2025-08-29 18:06:26.356680 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.95s
2025-08-29 18:06:26.356688 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.24s
2025-08-29 18:06:26.356701 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.71s
2025-08-29 18:06:26.356709 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.45s
2025-08-29 18:06:26.356716 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 7.38s
2025-08-29 18:06:26.356724 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.34s
2025-08-29 18:06:26.356738 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.63s
2025-08-29 18:06:26.356745 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.47s
2025-08-29 18:06:26.356753 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.14s
2025-08-29 18:06:26.356761 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.13s
2025-08-29 18:06:26.356769 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.80s
2025-08-29 18:06:26.356776 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.40s
2025-08-29 18:06:26.356784 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.00s
2025-08-29 18:06:26.356797 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 2.94s
2025-08-29 18:06:26.356805 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.53s
2025-08-29 18:06:26.356813 | orchestrator | 2025-08-29 18:06:26 | INFO  | Task
cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:06:26.356821 | orchestrator | 2025-08-29 18:06:26 | INFO  | Task c12fbf0d-902a-4ec4-8639-8b688408fac0 is in state STARTED
2025-08-29 18:06:26.356829 | orchestrator | 2025-08-29 18:06:26 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:06:26.357715 | orchestrator | 2025-08-29 18:06:26 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:06:26.357950 | orchestrator | 2025-08-29 18:06:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:07:30.251694 | orchestrator | 2025-08-29 18:07:30 | INFO  | Task
cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:07:30.252054 | orchestrator | 2025-08-29 18:07:30 | INFO  | Task c12fbf0d-902a-4ec4-8639-8b688408fac0 is in state STARTED
2025-08-29 18:07:30.252977 | orchestrator | 2025-08-29 18:07:30 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state STARTED
2025-08-29 18:07:30.253693 | orchestrator | 2025-08-29 18:07:30 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:07:30.253716 | orchestrator | 2025-08-29 18:07:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:07:33.285386 | orchestrator | 2025-08-29 18:07:33 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:07:33.285491 | orchestrator | 2025-08-29 18:07:33 | INFO  | Task c12fbf0d-902a-4ec4-8639-8b688408fac0 is in state STARTED
2025-08-29 18:07:33.286932 | orchestrator | 2025-08-29 18:07:33 | INFO  | Task 810a4f29-c46b-48f5-bdee-fafec5fdbe77 is in state SUCCESS
2025-08-29 18:07:33.289123 | orchestrator |
2025-08-29 18:07:33.289164 | orchestrator |
2025-08-29 18:07:33.289176 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:07:33.289186 | orchestrator |
2025-08-29 18:07:33.289196 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:07:33.289207 | orchestrator | Friday 29 August 2025 18:03:28 +0000 (0:00:00.574) 0:00:00.574 *********
2025-08-29 18:07:33.289217 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:07:33.289228 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:07:33.289237 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:07:33.289247 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:07:33.289256 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:07:33.289265 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:07:33.289275 | orchestrator |
2025-08-29 18:07:33.289285 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:07:33.289325 | orchestrator | Friday 29 August 2025 18:03:28 +0000 (0:00:00.739) 0:00:01.314 *********
2025-08-29 18:07:33.289336 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-08-29 18:07:33.289346 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-08-29 18:07:33.289356 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-08-29 18:07:33.289365 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-08-29 18:07:33.289374 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-08-29 18:07:33.289384 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-08-29 18:07:33.289394 | orchestrator |
2025-08-29 18:07:33.289457 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-08-29 18:07:33.289470 | orchestrator |
2025-08-29 18:07:33.289506 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 18:07:33.289516 | orchestrator | Friday 29 August 2025 18:03:29 +0000 (0:00:00.553) 0:00:01.868 *********
2025-08-29 18:07:33.289526 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 18:07:33.289538 | orchestrator |
2025-08-29 18:07:33.289548 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-08-29 18:07:33.289557 | orchestrator | Friday 29 August 2025 18:03:31 +0000 (0:00:01.612) 0:00:03.480 *********
2025-08-29 18:07:33.289567 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-08-29 18:07:33.289577 | orchestrator |
2025-08-29 18:07:33.289587 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-08-29 18:07:33.289596 | orchestrator | Friday 29 August 2025 18:03:34 +0000 (0:00:03.264) 0:00:06.745 *********
2025-08-29 18:07:33.289607 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-08-29 18:07:33.289616 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-08-29 18:07:33.289626 | orchestrator |
2025-08-29 18:07:33.289635 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-08-29 18:07:33.289673 | orchestrator | Friday 29 August 2025 18:03:39 +0000 (0:00:05.531) 0:00:12.277 *********
2025-08-29 18:07:33.289683 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 18:07:33.289693 | orchestrator |
2025-08-29 18:07:33.289702 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-08-29 18:07:33.289726 | orchestrator | Friday 29 August 2025 18:03:43 +0000 (0:00:03.182) 0:00:15.459 *********
2025-08-29 18:07:33.289737 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 18:07:33.289748 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-08-29 18:07:33.289759 | orchestrator |
2025-08-29 18:07:33.289770 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-08-29 18:07:33.289781 | orchestrator | Friday 29 August 2025 18:03:47 +0000 (0:00:03.966) 0:00:19.426 *********
2025-08-29 18:07:33.289792 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 18:07:33.289802 | orchestrator |
2025-08-29 18:07:33.289813 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-08-29 18:07:33.289825 | orchestrator | Friday 29 August 2025 18:03:50 +0000 (0:00:03.372) 0:00:22.798 *********
2025-08-29 18:07:33.289836 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-08-29 18:07:33.289847 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-08-29 18:07:33.289858 | orchestrator |
2025-08-29 18:07:33.289870 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-08-29 18:07:33.289881 | orchestrator | Friday 29 August 2025 18:03:56 +0000 (0:00:06.500) 0:00:29.299 *********
2025-08-29 18:07:33.289908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 18:07:33.289923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 18:07:33.289934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 18:07:33.289959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.289972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.289984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 18:07:33.290139 | orchestrator |
2025-08-29 18:07:33.290149 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 18:07:33.290159 | orchestrator | Friday 29 August 2025 18:04:00 +0000 (0:00:03.546) 0:00:32.845 *********
2025-08-29 18:07:33.290169 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:07:33.290178 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:07:33.290188 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:07:33.290197 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:07:33.290207 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:07:33.290216 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:07:33.290226 | orchestrator |
2025-08-29 18:07:33.290235 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 18:07:33.290244 | orchestrator | Friday 29 August 2025 18:04:01 +0000 (0:00:00.616) 0:00:33.462 *********
2025-08-29 18:07:33.290254 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:07:33.290278 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:07:33.290288 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:07:33.290326 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 18:07:33.290337 | orchestrator |
2025-08-29 18:07:33.290346 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-08-29 18:07:33.290356 | orchestrator | Friday 29 August 2025 18:04:02 +0000 (0:00:01.451) 0:00:34.914 *********
2025-08-29 18:07:33.290366 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-08-29 18:07:33.290375 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-08-29 18:07:33.290385 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-08-29 18:07:33.290394 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-08-29 18:07:33.290403 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-08-29 18:07:33.290413 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-08-29 18:07:33.290422 | orchestrator |
2025-08-29 18:07:33.290431 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-08-29 18:07:33.290441 | orchestrator | Friday 29 August 2025 18:04:04 +0000 (0:00:02.185) 0:00:37.099 *********
2025-08-29 18:07:33.290457 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 18:07:33.290468 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 18:07:33.290485 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 18:07:33.290496 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 18:07:33.290516 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 18:07:33.290530 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-08-29 18:07:33.290541 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 18:07:33.290557 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 18:07:33.290574 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 18:07:33.290584 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 18:07:33.290599 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 18:07:33.290610 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-08-29 18:07:33.290619 | orchestrator | 2025-08-29 18:07:33.290629 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-08-29 18:07:33.290638 | orchestrator | Friday 29 August 2025 18:04:08 +0000 (0:00:03.883) 0:00:40.982 ********* 2025-08-29 18:07:33.290648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 18:07:33.290659 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 18:07:33.290668 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-08-29 18:07:33.290678 | orchestrator | 2025-08-29 18:07:33.290694 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-08-29 18:07:33.290704 | orchestrator | Friday 29 August 2025 18:04:12 +0000 (0:00:03.546) 0:00:44.529 ********* 2025-08-29 18:07:33.290729 | 
orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-08-29 18:07:33.290740 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-08-29 18:07:33.290749 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-08-29 18:07:33.290758 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 18:07:33.290768 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 18:07:33.290777 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 18:07:33.290787 | orchestrator | 2025-08-29 18:07:33.290796 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-08-29 18:07:33.290806 | orchestrator | Friday 29 August 2025 18:04:15 +0000 (0:00:03.449) 0:00:47.979 ********* 2025-08-29 18:07:33.290815 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-08-29 18:07:33.290825 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-08-29 18:07:33.290834 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-08-29 18:07:33.290844 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-08-29 18:07:33.290853 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-08-29 18:07:33.290862 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-08-29 18:07:33.290872 | orchestrator | 2025-08-29 18:07:33.290881 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-08-29 18:07:33.290891 | orchestrator | Friday 29 August 2025 18:04:16 +0000 (0:00:01.197) 0:00:49.176 ********* 2025-08-29 18:07:33.290900 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.290910 | orchestrator | 2025-08-29 18:07:33.290919 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-08-29 
18:07:33.290929 | orchestrator | Friday 29 August 2025 18:04:16 +0000 (0:00:00.158) 0:00:49.335 ********* 2025-08-29 18:07:33.290938 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.290947 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.290957 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.290966 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:07:33.291017 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.291052 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.291062 | orchestrator | 2025-08-29 18:07:33.291072 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 18:07:33.291081 | orchestrator | Friday 29 August 2025 18:04:18 +0000 (0:00:01.238) 0:00:50.574 ********* 2025-08-29 18:07:33.291092 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:07:33.291103 | orchestrator | 2025-08-29 18:07:33.291112 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-08-29 18:07:33.291121 | orchestrator | Friday 29 August 2025 18:04:19 +0000 (0:00:01.304) 0:00:51.879 ********* 2025-08-29 18:07:33.291137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.291155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.291174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291184 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.291214 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291366 | orchestrator | 2025-08-29 18:07:33.291376 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-08-29 18:07:33.291385 | orchestrator | Friday 29 August 2025 18:04:23 +0000 (0:00:04.300) 0:00:56.179 ********* 2025-08-29 18:07:33.291402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.291412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291423 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.291433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.291443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.291485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291522 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.291532 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.291541 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
18:07:33.291552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291586 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.291596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291623 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.291632 | orchestrator | 2025-08-29 18:07:33.291642 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-08-29 18:07:33.291652 | orchestrator | Friday 29 August 2025 18:04:25 +0000 (0:00:01.997) 0:00:58.177 ********* 2025-08-29 18:07:33.291662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.291672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.291703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291713 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.291723 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.291738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.291749 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291759 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.291769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291799 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:07:33.291809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291834 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.291844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.291869 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.291877 | orchestrator | 2025-08-29 18:07:33.291885 | orchestrator | TASK [cinder : Copying over 
config.json files for services] ******************** 2025-08-29 18:07:33.291893 | orchestrator | Friday 29 August 2025 18:04:27 +0000 (0:00:01.992) 0:01:00.169 ********* 2025-08-29 18:07:33.291905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.291913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.291927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.291936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291963 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.291993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292034 | orchestrator | 2025-08-29 18:07:33.292042 | orchestrator | TASK 
[cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 18:07:33.292050 | orchestrator | Friday 29 August 2025 18:04:31 +0000 (0:00:03.266) 0:01:03.436 ********* 2025-08-29 18:07:33.292058 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 18:07:33.292066 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:07:33.292074 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 18:07:33.292082 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.292101 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 18:07:33.292109 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.292126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 18:07:33.292134 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 18:07:33.292147 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 18:07:33.292155 | orchestrator | 2025-08-29 18:07:33.292162 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 18:07:33.292170 | orchestrator | Friday 29 August 2025 18:04:33 +0000 (0:00:02.489) 0:01:05.925 ********* 2025-08-29 18:07:33.292178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.292193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.292205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.292241 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292355 | orchestrator | 2025-08-29 18:07:33.292362 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-08-29 18:07:33.292370 | orchestrator | Friday 29 August 2025 18:04:45 +0000 (0:00:11.467) 0:01:17.393 ********* 2025-08-29 18:07:33.292378 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.292386 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.292394 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.292402 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:07:33.292409 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:07:33.292417 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:07:33.292425 | orchestrator | 2025-08-29 18:07:33.292432 | 
orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-08-29 18:07:33.292440 | orchestrator | Friday 29 August 2025 18:04:48 +0000 (0:00:03.375) 0:01:20.769 ********* 2025-08-29 18:07:33.292452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.292460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.292487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-08-29 18:07:33.292503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292535 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.292542 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.292549 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.292555 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:07:33.292566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292580 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.292587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 18:07:33.292604 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.292611 | orchestrator | 2025-08-29 18:07:33.292617 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-08-29 18:07:33.292629 | orchestrator | Friday 29 August 2025 18:04:50 +0000 (0:00:02.142) 0:01:22.911 ********* 2025-08-29 18:07:33.292636 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.292642 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.292649 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.292655 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:07:33.292662 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.292668 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.292674 | orchestrator | 2025-08-29 18:07:33.292681 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-08-29 18:07:33.292687 | orchestrator | Friday 29 August 2025 18:04:51 +0000 (0:00:01.106) 0:01:24.017 ********* 2025-08-29 18:07:33.292699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', 
'', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.292714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.292724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 18:07:33.292736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 
18:07:33.292792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 18:07:33.292856 | orchestrator | 2025-08-29 18:07:33.292863 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 18:07:33.292870 | orchestrator | Friday 29 August 2025 18:04:54 +0000 (0:00:02.571) 0:01:26.589 ********* 2025-08-29 18:07:33.292877 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.292883 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:07:33.292890 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:07:33.292897 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:07:33.292903 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:07:33.292910 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:07:33.292916 | orchestrator | 2025-08-29 18:07:33.292923 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-08-29 18:07:33.292930 | 
orchestrator | Friday 29 August 2025 18:04:55 +0000 (0:00:00.794) 0:01:27.383 ********* 2025-08-29 18:07:33.292936 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:07:33.292943 | orchestrator | 2025-08-29 18:07:33.292949 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-08-29 18:07:33.292956 | orchestrator | Friday 29 August 2025 18:04:57 +0000 (0:00:02.036) 0:01:29.419 ********* 2025-08-29 18:07:33.292963 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:07:33.292969 | orchestrator | 2025-08-29 18:07:33.292976 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-08-29 18:07:33.292982 | orchestrator | Friday 29 August 2025 18:04:59 +0000 (0:00:02.070) 0:01:31.490 ********* 2025-08-29 18:07:33.292989 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:07:33.292996 | orchestrator | 2025-08-29 18:07:33.293002 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 18:07:33.293009 | orchestrator | Friday 29 August 2025 18:05:18 +0000 (0:00:19.683) 0:01:51.174 ********* 2025-08-29 18:07:33.293015 | orchestrator | 2025-08-29 18:07:33.293022 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 18:07:33.293034 | orchestrator | Friday 29 August 2025 18:05:18 +0000 (0:00:00.068) 0:01:51.242 ********* 2025-08-29 18:07:33.293040 | orchestrator | 2025-08-29 18:07:33.293047 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 18:07:33.293053 | orchestrator | Friday 29 August 2025 18:05:18 +0000 (0:00:00.066) 0:01:51.309 ********* 2025-08-29 18:07:33.293060 | orchestrator | 2025-08-29 18:07:33.293066 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 18:07:33.293073 | orchestrator | Friday 29 August 2025 18:05:19 +0000 (0:00:00.064) 
0:01:51.373 ********* 2025-08-29 18:07:33.293079 | orchestrator | 2025-08-29 18:07:33.293089 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 18:07:33.293096 | orchestrator | Friday 29 August 2025 18:05:19 +0000 (0:00:00.082) 0:01:51.455 ********* 2025-08-29 18:07:33.293103 | orchestrator | 2025-08-29 18:07:33.293109 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-08-29 18:07:33.293116 | orchestrator | Friday 29 August 2025 18:05:19 +0000 (0:00:00.070) 0:01:51.526 ********* 2025-08-29 18:07:33.293123 | orchestrator | 2025-08-29 18:07:33.293129 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-08-29 18:07:33.293136 | orchestrator | Friday 29 August 2025 18:05:19 +0000 (0:00:00.067) 0:01:51.593 ********* 2025-08-29 18:07:33.293142 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:07:33.293149 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:07:33.293156 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:07:33.293162 | orchestrator | 2025-08-29 18:07:33.293169 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-08-29 18:07:33.293175 | orchestrator | Friday 29 August 2025 18:05:49 +0000 (0:00:30.358) 0:02:21.952 ********* 2025-08-29 18:07:33.293182 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:07:33.293188 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:07:33.293195 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:07:33.293202 | orchestrator | 2025-08-29 18:07:33.293208 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-08-29 18:07:33.293215 | orchestrator | Friday 29 August 2025 18:06:00 +0000 (0:00:10.947) 0:02:32.900 ********* 2025-08-29 18:07:33.293221 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:07:33.293228 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 18:07:33.293234 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:07:33.293241 | orchestrator | 2025-08-29 18:07:33.293248 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-08-29 18:07:33.293254 | orchestrator | Friday 29 August 2025 18:07:18 +0000 (0:01:18.258) 0:03:51.159 ********* 2025-08-29 18:07:33.293261 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:07:33.293267 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:07:33.293274 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:07:33.293281 | orchestrator | 2025-08-29 18:07:33.293287 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-08-29 18:07:33.293308 | orchestrator | Friday 29 August 2025 18:07:29 +0000 (0:00:11.071) 0:04:02.231 ********* 2025-08-29 18:07:33.293315 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:07:33.293322 | orchestrator | 2025-08-29 18:07:33.293328 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:07:33.293338 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 18:07:33.293346 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 18:07:33.293352 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-08-29 18:07:33.293359 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 18:07:33.293370 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 18:07:33.293377 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-08-29 18:07:33.293383 | orchestrator | 2025-08-29 18:07:33.293390 | orchestrator 
| 2025-08-29 18:07:33.293396 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:07:33.293403 | orchestrator | Friday 29 August 2025 18:07:32 +0000 (0:00:02.609) 0:04:04.840 ********* 2025-08-29 18:07:33.293410 | orchestrator | =============================================================================== 2025-08-29 18:07:33.293416 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 78.26s 2025-08-29 18:07:33.293423 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.36s 2025-08-29 18:07:33.293429 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.68s 2025-08-29 18:07:33.293436 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.47s 2025-08-29 18:07:33.293442 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.07s 2025-08-29 18:07:33.293449 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.95s 2025-08-29 18:07:33.293455 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.50s 2025-08-29 18:07:33.293462 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.53s 2025-08-29 18:07:33.293468 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.30s 2025-08-29 18:07:33.293475 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.97s 2025-08-29 18:07:33.293482 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.88s 2025-08-29 18:07:33.293488 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.55s 2025-08-29 18:07:33.293494 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.55s 2025-08-29 
18:07:33.293501 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.45s 2025-08-29 18:07:33.293507 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.38s 2025-08-29 18:07:33.293520 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.37s 2025-08-29 18:07:33.293526 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.27s 2025-08-29 18:07:33.293533 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.26s 2025-08-29 18:07:33.293539 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.18s 2025-08-29 18:07:33.293546 | orchestrator | cinder : Wait for cinder services to update service versions ------------ 2.61s 2025-08-29 18:07:33.293552 | orchestrator | 2025-08-29 18:07:33 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:07:33.293559 | orchestrator | 2025-08-29 18:07:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:07:36.326501 | orchestrator | 2025-08-29 18:07:36 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:07:36.327519 | orchestrator | 2025-08-29 18:07:36 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:07:36.328548 | orchestrator | 2025-08-29 18:07:36 | INFO  | Task c12fbf0d-902a-4ec4-8639-8b688408fac0 is in state STARTED 2025-08-29 18:07:36.329553 | orchestrator | 2025-08-29 18:07:36 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:07:36.329587 | orchestrator | 2025-08-29 18:07:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:07:39.362176 | orchestrator | 2025-08-29 18:07:39 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:07:39.362627 | orchestrator | 2025-08-29 18:07:39 | INFO  | Task 
cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:07:39.363401 | orchestrator | 2025-08-29 18:07:39 | INFO  | Task c12fbf0d-902a-4ec4-8639-8b688408fac0 is in state STARTED 2025-08-29 18:07:39.364143 | orchestrator | 2025-08-29 18:07:39 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:07:39.364171 | orchestrator | 2025-08-29 18:07:39 | INFO  | Wait 1 second(s) until the next check [identical poll cycles for tasks eab70c59-7e96-40eb-a57b-a42e71b1e396, cf204358-b04d-422d-a5f4-265d1c8b27a2, c12fbf0d-902a-4ec4-8639-8b688408fac0, and 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 repeat every ~3 seconds, all in state STARTED, from 18:07:42 through 18:08:30] 2025-08-29 18:08:34.007820 | orchestrator | 2025-08-29 18:08:34 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:08:34.008782 | orchestrator | 2025-08-29 18:08:34 | INFO  | Task
cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:08:34.010582 | orchestrator | 2025-08-29 18:08:34 | INFO  | Task c12fbf0d-902a-4ec4-8639-8b688408fac0 is in state SUCCESS 2025-08-29 18:08:34.012114 | orchestrator | 2025-08-29 18:08:34.012146 | orchestrator | 2025-08-29 18:08:34.012160 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:08:34.012172 | orchestrator | 2025-08-29 18:08:34.012183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:08:34.012194 | orchestrator | Friday 29 August 2025 18:06:28 +0000 (0:00:00.279) 0:00:00.279 ********* 2025-08-29 18:08:34.012205 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:08:34.012275 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:08:34.012287 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:08:34.012298 | orchestrator | 2025-08-29 18:08:34.012379 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:08:34.012394 | orchestrator | Friday 29 August 2025 18:06:28 +0000 (0:00:00.286) 0:00:00.566 ********* 2025-08-29 18:08:34.012405 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-08-29 18:08:34.012417 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-08-29 18:08:34.012428 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-08-29 18:08:34.012439 | orchestrator | 2025-08-29 18:08:34.012450 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-08-29 18:08:34.012461 | orchestrator | 2025-08-29 18:08:34.012472 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 18:08:34.012483 | orchestrator | Friday 29 August 2025 18:06:29 +0000 (0:00:00.459) 0:00:01.026 ********* 2025-08-29 18:08:34.012494 | orchestrator | included: 
/ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:08:34.012506 | orchestrator | 2025-08-29 18:08:34.012517 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-08-29 18:08:34.012528 | orchestrator | Friday 29 August 2025 18:06:29 +0000 (0:00:00.550) 0:00:01.577 ********* 2025-08-29 18:08:34.012539 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-08-29 18:08:34.012550 | orchestrator | 2025-08-29 18:08:34.012561 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-08-29 18:08:34.012572 | orchestrator | Friday 29 August 2025 18:06:33 +0000 (0:00:03.208) 0:00:04.785 ********* 2025-08-29 18:08:34.012583 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-08-29 18:08:34.012594 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-08-29 18:08:34.012605 | orchestrator | 2025-08-29 18:08:34.012616 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-08-29 18:08:34.012627 | orchestrator | Friday 29 August 2025 18:06:39 +0000 (0:00:05.981) 0:00:10.766 ********* 2025-08-29 18:08:34.012639 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 18:08:34.012650 | orchestrator | 2025-08-29 18:08:34.012661 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-08-29 18:08:34.012698 | orchestrator | Friday 29 August 2025 18:06:42 +0000 (0:00:03.045) 0:00:13.812 ********* 2025-08-29 18:08:34.012710 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 18:08:34.012724 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-08-29 18:08:34.012737 | orchestrator | 2025-08-29 18:08:34.012750 | orchestrator | TASK 
[service-ks-register : barbican | Creating roles] ************************* 2025-08-29 18:08:34.012763 | orchestrator | Friday 29 August 2025 18:06:45 +0000 (0:00:03.784) 0:00:17.597 ********* 2025-08-29 18:08:34.012776 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 18:08:34.012789 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-08-29 18:08:34.012802 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-08-29 18:08:34.012815 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-08-29 18:08:34.012828 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-08-29 18:08:34.012840 | orchestrator | 2025-08-29 18:08:34.012854 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-08-29 18:08:34.012865 | orchestrator | Friday 29 August 2025 18:07:00 +0000 (0:00:14.916) 0:00:32.514 ********* 2025-08-29 18:08:34.012876 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-08-29 18:08:34.012887 | orchestrator | 2025-08-29 18:08:34.012898 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-08-29 18:08:34.012909 | orchestrator | Friday 29 August 2025 18:07:04 +0000 (0:00:04.060) 0:00:36.574 ********* 2025-08-29 18:08:34.012925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.012962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.012976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.012996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013089 | orchestrator | 2025-08-29 18:08:34.013101 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-08-29 18:08:34.013112 | orchestrator | Friday 29 August 2025 18:07:07 +0000 (0:00:02.846) 0:00:39.420 ********* 2025-08-29 18:08:34.013123 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-08-29 18:08:34.013134 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-08-29 18:08:34.013145 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-08-29 18:08:34.013156 | orchestrator | 2025-08-29 18:08:34.013167 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-08-29 18:08:34.013178 | orchestrator | Friday 29 August 2025 18:07:08 +0000 (0:00:01.302) 0:00:40.723 ********* 2025-08-29 18:08:34.013189 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.013201 | orchestrator | 2025-08-29 18:08:34.013212 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-08-29 18:08:34.013222 | orchestrator | Friday 29 August 2025 18:07:09 +0000 (0:00:00.218) 0:00:40.941 ********* 2025-08-29 18:08:34.013233 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.013245 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:08:34.013256 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:08:34.013267 | orchestrator | 2025-08-29 18:08:34.013278 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 18:08:34.013289 | 
orchestrator | Friday 29 August 2025 18:07:10 +0000 (0:00:01.323) 0:00:42.264 ********* 2025-08-29 18:08:34.013300 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:08:34.013353 | orchestrator | 2025-08-29 18:08:34.013365 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-08-29 18:08:34.013376 | orchestrator | Friday 29 August 2025 18:07:11 +0000 (0:00:00.837) 0:00:43.102 ********* 2025-08-29 18:08:34.013387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.013411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.013431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.013443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.013530 | orchestrator | 2025-08-29 18:08:34.013541 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-08-29 18:08:34.013552 | orchestrator | Friday 29 August 2025 18:07:14 +0000 (0:00:03.421) 0:00:46.523 ********* 2025-08-29 18:08:34.013563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.013575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.013629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013652 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:08:34.013663 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.013674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.013686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013715 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:08:34.013726 | orchestrator | 2025-08-29 18:08:34.013743 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-08-29 18:08:34.013755 | orchestrator | Friday 29 August 2025 18:07:16 +0000 (0:00:01.748) 0:00:48.271 ********* 2025-08-29 18:08:34.013770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.013782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013805 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.013816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.013828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013876 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:08:34.013887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.013898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.013920 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:08:34.013931 | orchestrator | 2025-08-29 18:08:34.013942 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-08-29 18:08:34.013953 | orchestrator | Friday 29 August 2025 18:07:17 +0000 (0:00:00.898) 0:00:49.169 ********* 2025-08-29 18:08:34.013964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.014297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.014339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.014351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014443 | orchestrator | 2025-08-29 18:08:34.014454 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-08-29 18:08:34.014465 | orchestrator | Friday 29 August 2025 
18:07:20 +0000 (0:00:03.239) 0:00:52.408 ********* 2025-08-29 18:08:34.014476 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:08:34.014487 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:08:34.014498 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:08:34.014508 | orchestrator | 2025-08-29 18:08:34.014519 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-08-29 18:08:34.014530 | orchestrator | Friday 29 August 2025 18:07:23 +0000 (0:00:02.368) 0:00:54.777 ********* 2025-08-29 18:08:34.014540 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 18:08:34.014551 | orchestrator | 2025-08-29 18:08:34.014562 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-08-29 18:08:34.014573 | orchestrator | Friday 29 August 2025 18:07:24 +0000 (0:00:01.308) 0:00:56.085 ********* 2025-08-29 18:08:34.014583 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.014594 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:08:34.014604 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:08:34.014615 | orchestrator | 2025-08-29 18:08:34.014626 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-08-29 18:08:34.014636 | orchestrator | Friday 29 August 2025 18:07:25 +0000 (0:00:00.739) 0:00:56.824 ********* 2025-08-29 18:08:34.014647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.014671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.014688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.014700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.014785 | orchestrator | 2025-08-29 18:08:34.014795 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 18:08:34.014806 | orchestrator | Friday 29 August 2025 18:07:38 +0000 (0:00:13.206) 0:01:10.031 ********* 2025-08-29 18:08:34.014817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.014829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.014847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.014858 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.014875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.014892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.014906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.014920 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:08:34.014933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 18:08:34.014952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.014966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:08:34.014979 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:08:34.014991 | orchestrator | 
2025-08-29 18:08:34.015004 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 18:08:34.015016 | orchestrator | Friday 29 August 2025 18:07:40 +0000 (0:00:01.767) 0:01:11.799 ********* 2025-08-29 18:08:34.015041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.015056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.015071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 18:08:34.015091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.015104 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.015123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.015146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.015160 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.015182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:08:34.015193 | orchestrator | 2025-08-29 18:08:34.015204 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 18:08:34.015215 | orchestrator | Friday 29 August 2025 18:07:43 +0000 (0:00:03.589) 0:01:15.388 ********* 2025-08-29 18:08:34.015226 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:08:34.015236 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:08:34.015247 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:08:34.015258 | orchestrator | 2025-08-29 18:08:34.015269 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 18:08:34.015279 | orchestrator | Friday 29 August 2025 18:07:44 +0000 (0:00:00.617) 0:01:16.005 ********* 
2025-08-29 18:08:34.015290 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:08:34.015301 | orchestrator | 2025-08-29 18:08:34.015362 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 18:08:34.015376 | orchestrator | Friday 29 August 2025 18:07:46 +0000 (0:00:02.368) 0:01:18.374 ********* 2025-08-29 18:08:34.015387 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:08:34.015398 | orchestrator | 2025-08-29 18:08:34.015409 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 18:08:34.015420 | orchestrator | Friday 29 August 2025 18:07:48 +0000 (0:00:02.247) 0:01:20.622 ********* 2025-08-29 18:08:34.015431 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:08:34.015442 | orchestrator | 2025-08-29 18:08:34.015453 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 18:08:34.015465 | orchestrator | Friday 29 August 2025 18:08:01 +0000 (0:00:12.524) 0:01:33.146 ********* 2025-08-29 18:08:34.015476 | orchestrator | 2025-08-29 18:08:34.015487 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 18:08:34.015498 | orchestrator | Friday 29 August 2025 18:08:01 +0000 (0:00:00.235) 0:01:33.381 ********* 2025-08-29 18:08:34.015509 | orchestrator | 2025-08-29 18:08:34.015520 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 18:08:34.015531 | orchestrator | Friday 29 August 2025 18:08:01 +0000 (0:00:00.262) 0:01:33.643 ********* 2025-08-29 18:08:34.015542 | orchestrator | 2025-08-29 18:08:34.015553 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-08-29 18:08:34.015564 | orchestrator | Friday 29 August 2025 18:08:02 +0000 (0:00:00.269) 0:01:33.913 ********* 2025-08-29 18:08:34.015575 | orchestrator | changed: [testbed-node-0] 
2025-08-29 18:08:34.015587 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:08:34.015598 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:08:34.015609 | orchestrator | 2025-08-29 18:08:34.015620 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 18:08:34.015632 | orchestrator | Friday 29 August 2025 18:08:10 +0000 (0:00:08.069) 0:01:41.983 ********* 2025-08-29 18:08:34.015643 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:08:34.015654 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:08:34.015672 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:08:34.015684 | orchestrator | 2025-08-29 18:08:34.015695 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 18:08:34.015706 | orchestrator | Friday 29 August 2025 18:08:21 +0000 (0:00:11.577) 0:01:53.560 ********* 2025-08-29 18:08:34.015717 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:08:34.015728 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:08:34.015747 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:08:34.015758 | orchestrator | 2025-08-29 18:08:34.015769 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:08:34.015787 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 18:08:34.015800 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 18:08:34.015811 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 18:08:34.015822 | orchestrator | 2025-08-29 18:08:34.015834 | orchestrator | 2025-08-29 18:08:34.015845 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:08:34.015856 | orchestrator | Friday 29 August 2025 18:08:32 +0000 
(0:00:11.123) 0:02:04.684 ********* 2025-08-29 18:08:34.015867 | orchestrator | =============================================================================== 2025-08-29 18:08:34.015878 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.92s 2025-08-29 18:08:34.015889 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 13.21s 2025-08-29 18:08:34.015900 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.52s 2025-08-29 18:08:34.015911 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.58s 2025-08-29 18:08:34.015922 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.12s 2025-08-29 18:08:34.015933 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.07s 2025-08-29 18:08:34.015944 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.98s 2025-08-29 18:08:34.015956 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.06s 2025-08-29 18:08:34.015967 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.78s 2025-08-29 18:08:34.015978 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.59s 2025-08-29 18:08:34.015990 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.42s 2025-08-29 18:08:34.016001 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.24s 2025-08-29 18:08:34.016012 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.21s 2025-08-29 18:08:34.016023 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.05s 2025-08-29 18:08:34.016034 | orchestrator | barbican : Ensuring config directories exist 
---------------------------- 2.85s 2025-08-29 18:08:34.016046 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.37s 2025-08-29 18:08:34.016057 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.37s 2025-08-29 18:08:34.016068 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.25s 2025-08-29 18:08:34.016079 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.77s 2025-08-29 18:08:34.016090 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.75s 2025-08-29 18:08:34.016101 | orchestrator | 2025-08-29 18:08:34 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:08:34.016113 | orchestrator | 2025-08-29 18:08:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:08:37.087018 | orchestrator | 2025-08-29 18:08:37 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:08:37.087542 | orchestrator | 2025-08-29 18:08:37 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:08:37.088709 | orchestrator | 2025-08-29 18:08:37 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:08:37.090156 | orchestrator | 2025-08-29 18:08:37 | INFO  | Task 262e78c2-4f5c-4026-9de5-cbf848998a32 is in state STARTED 2025-08-29 18:08:37.090182 | orchestrator | 2025-08-29 18:08:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:08:40.115851 | orchestrator | 2025-08-29 18:08:40 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:08:40.116807 | orchestrator | 2025-08-29 18:08:40 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:08:40.117062 | orchestrator | 2025-08-29 18:08:40 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 
18:08:40.117702 | orchestrator | 2025-08-29 18:08:40 | INFO  | Task 262e78c2-4f5c-4026-9de5-cbf848998a32 is in state STARTED 2025-08-29 18:08:40.117727 | orchestrator | 2025-08-29 18:08:40 | INFO  | Wait 1 second(s) until the next check [identical polling cycles from 18:08:43 through 18:09:41 trimmed: all four tasks remained in state STARTED] 2025-08-29 18:09:44.015289 | orchestrator | 2025-08-29 18:09:44 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:09:44.015750 | orchestrator | 2025-08-29 18:09:44 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:09:44.016604 | orchestrator | 2025-08-29 18:09:44 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:09:44.017576 | orchestrator | 2025-08-29 18:09:44 | INFO  | Task
30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:09:44.018556 | orchestrator | 2025-08-29 18:09:44 | INFO  | Task 262e78c2-4f5c-4026-9de5-cbf848998a32 is in state SUCCESS 2025-08-29 18:09:44.018582 | orchestrator | 2025-08-29 18:09:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:09:47.062076 | orchestrator | 2025-08-29 18:09:47 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:09:47.063223 | orchestrator | 2025-08-29 18:09:47 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:09:47.064715 | orchestrator | 2025-08-29 18:09:47 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:09:47.065878 | orchestrator | 2025-08-29 18:09:47 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:09:47.065901 | orchestrator | 2025-08-29 18:09:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:09:50.114588 | orchestrator | 2025-08-29 18:09:50 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:09:50.116206 | orchestrator | 2025-08-29 18:09:50 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:09:50.119050 | orchestrator | 2025-08-29 18:09:50 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:09:50.120690 | orchestrator | 2025-08-29 18:09:50 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:09:50.120882 | orchestrator | 2025-08-29 18:09:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:09:53.171902 | orchestrator | 2025-08-29 18:09:53 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:09:53.174540 | orchestrator | 2025-08-29 18:09:53 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:09:53.175548 | orchestrator | 2025-08-29 18:09:53 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:09:53.177849 | orchestrator | 2025-08-29 18:09:53 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:09:53.177909 | orchestrator | 2025-08-29 18:09:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:09:56.224041 | orchestrator | 2025-08-29 18:09:56 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:09:56.224233 | orchestrator | 2025-08-29 18:09:56 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:09:56.226284 | orchestrator | 2025-08-29 18:09:56 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:09:56.228503 | orchestrator | 2025-08-29 18:09:56 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:09:56.228536 | orchestrator | 2025-08-29 18:09:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:09:59.295047 | orchestrator | 2025-08-29 18:09:59 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:09:59.295535 | orchestrator | 2025-08-29 18:09:59 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:09:59.297300 | orchestrator | 2025-08-29 18:09:59 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:09:59.299026 | orchestrator | 2025-08-29 18:09:59 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:09:59.299053 | orchestrator | 2025-08-29 18:09:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:02.339587 | orchestrator | 2025-08-29 18:10:02 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:02.341248 | orchestrator | 2025-08-29 18:10:02 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:02.342736 | orchestrator | 2025-08-29 18:10:02 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:02.344041 | orchestrator | 2025-08-29 18:10:02 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:02.344094 | orchestrator | 2025-08-29 18:10:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:05.400344 | orchestrator | 2025-08-29 18:10:05 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:05.401337 | orchestrator | 2025-08-29 18:10:05 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:05.412316 | orchestrator | 2025-08-29 18:10:05 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:05.412407 | orchestrator | 2025-08-29 18:10:05 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:05.412422 | orchestrator | 2025-08-29 18:10:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:08.460450 | orchestrator | 2025-08-29 18:10:08 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:08.461255 | orchestrator | 2025-08-29 18:10:08 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:08.462225 | orchestrator | 2025-08-29 18:10:08 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:08.463621 | orchestrator | 2025-08-29 18:10:08 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:08.463655 | orchestrator | 2025-08-29 18:10:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:11.534953 | orchestrator | 2025-08-29 18:10:11 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:11.536560 | orchestrator | 2025-08-29 18:10:11 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:11.539676 | orchestrator | 2025-08-29 18:10:11 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:11.541987 | orchestrator | 2025-08-29 18:10:11 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:11.542316 | orchestrator | 2025-08-29 18:10:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:14.579019 | orchestrator | 2025-08-29 18:10:14 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:14.581290 | orchestrator | 2025-08-29 18:10:14 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:14.584580 | orchestrator | 2025-08-29 18:10:14 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:14.586949 | orchestrator | 2025-08-29 18:10:14 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:14.587633 | orchestrator | 2025-08-29 18:10:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:17.627429 | orchestrator | 2025-08-29 18:10:17 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:17.628267 | orchestrator | 2025-08-29 18:10:17 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:17.628296 | orchestrator | 2025-08-29 18:10:17 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:17.628308 | orchestrator | 2025-08-29 18:10:17 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:17.628319 | orchestrator | 2025-08-29 18:10:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:20.674065 | orchestrator | 2025-08-29 18:10:20 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:20.679703 | orchestrator | 2025-08-29 18:10:20 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:20.681260 | orchestrator | 2025-08-29 18:10:20 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:20.682667 | orchestrator | 2025-08-29 18:10:20 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:20.682693 | orchestrator | 2025-08-29 18:10:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:23.728032 | orchestrator | 2025-08-29 18:10:23 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:23.730936 | orchestrator | 2025-08-29 18:10:23 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:23.732423 | orchestrator | 2025-08-29 18:10:23 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:23.734870 | orchestrator | 2025-08-29 18:10:23 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:23.734965 | orchestrator | 2025-08-29 18:10:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:26.811706 | orchestrator | 2025-08-29 18:10:26 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:26.812421 | orchestrator | 2025-08-29 18:10:26 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:26.813357 | orchestrator | 2025-08-29 18:10:26 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:26.814206 | orchestrator | 2025-08-29 18:10:26 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:26.814230 | orchestrator | 2025-08-29 18:10:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:29.859560 | orchestrator | 2025-08-29 18:10:29 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:29.860103 | orchestrator | 2025-08-29 18:10:29 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:29.861029 | orchestrator | 2025-08-29 18:10:29 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:29.861691 | orchestrator | 2025-08-29 18:10:29 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:29.861713 | orchestrator | 2025-08-29 18:10:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:32.988932 | orchestrator | 2025-08-29 18:10:32 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:32.989099 | orchestrator | 2025-08-29 18:10:32 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:32.990118 | orchestrator | 2025-08-29 18:10:32 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:32.990584 | orchestrator | 2025-08-29 18:10:32 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:32.990764 | orchestrator | 2025-08-29 18:10:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:36.038586 | orchestrator | 2025-08-29 18:10:36 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:36.040773 | orchestrator | 2025-08-29 18:10:36 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:36.042292 | orchestrator | 2025-08-29 18:10:36 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:36.043854 | orchestrator | 2025-08-29 18:10:36 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:36.044027 | orchestrator | 2025-08-29 18:10:36 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:39.117127 | orchestrator | 2025-08-29 18:10:39 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:39.118907 | orchestrator | 2025-08-29 18:10:39 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:39.120691 | orchestrator | 2025-08-29 18:10:39 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:39.122801 | orchestrator | 2025-08-29 18:10:39 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:39.123191 | orchestrator | 2025-08-29 18:10:39 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:42.178527 | orchestrator | 2025-08-29 18:10:42 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:42.180794 | orchestrator | 2025-08-29 18:10:42 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:42.182454 | orchestrator | 2025-08-29 18:10:42 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:42.184163 | orchestrator | 2025-08-29 18:10:42 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:42.184195 | orchestrator | 2025-08-29 18:10:42 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:45.236558 | orchestrator | 2025-08-29 18:10:45 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:45.237426 | orchestrator | 2025-08-29 18:10:45 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:45.238837 | orchestrator | 2025-08-29 18:10:45 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:45.241128 | orchestrator | 2025-08-29 18:10:45 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:45.241948 | orchestrator | 2025-08-29 18:10:45 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:48.285304 | orchestrator | 2025-08-29 18:10:48 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:48.286879 | orchestrator | 2025-08-29 18:10:48 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:48.288588 | orchestrator | 2025-08-29 18:10:48 | INFO  | Task 
adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:48.290605 | orchestrator | 2025-08-29 18:10:48 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:48.290629 | orchestrator | 2025-08-29 18:10:48 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:51.340264 | orchestrator | 2025-08-29 18:10:51 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:51.344175 | orchestrator | 2025-08-29 18:10:51 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:51.346774 | orchestrator | 2025-08-29 18:10:51 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state STARTED 2025-08-29 18:10:51.349708 | orchestrator | 2025-08-29 18:10:51 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED 2025-08-29 18:10:51.349799 | orchestrator | 2025-08-29 18:10:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:10:54.406877 | orchestrator | 2025-08-29 18:10:54 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state STARTED 2025-08-29 18:10:54.408719 | orchestrator | 2025-08-29 18:10:54 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:10:54.410608 | orchestrator | 2025-08-29 18:10:54 | INFO  | Task adfaefdb-4a21-4f7d-9b1f-f5586856b9ec is in state SUCCESS 2025-08-29 18:10:54.412983 | orchestrator | 2025-08-29 18:10:54.413028 | orchestrator | 2025-08-29 18:10:54.413041 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-08-29 18:10:54.413078 | orchestrator | 2025-08-29 18:10:54.413091 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-08-29 18:10:54.413102 | orchestrator | Friday 29 August 2025 18:08:43 +0000 (0:00:00.222) 0:00:00.222 ********* 2025-08-29 18:10:54.413113 | orchestrator | changed: [localhost] 2025-08-29 18:10:54.413125 | orchestrator | 2025-08-29 
18:10:54.413137 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-08-29 18:10:54.413148 | orchestrator | Friday 29 August 2025 18:08:44 +0000 (0:00:01.704) 0:00:01.926 ********* 2025-08-29 18:10:54.413159 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2025-08-29 18:10:54.413169 | orchestrator | changed: [localhost] 2025-08-29 18:10:54.413180 | orchestrator | 2025-08-29 18:10:54.413190 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-08-29 18:10:54.413201 | orchestrator | Friday 29 August 2025 18:09:37 +0000 (0:00:52.607) 0:00:54.533 ********* 2025-08-29 18:10:54.413212 | orchestrator | changed: [localhost] 2025-08-29 18:10:54.413222 | orchestrator | 2025-08-29 18:10:54.413233 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:10:54.413243 | orchestrator | 2025-08-29 18:10:54.413254 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:10:54.413264 | orchestrator | Friday 29 August 2025 18:09:41 +0000 (0:00:04.339) 0:00:58.873 ********* 2025-08-29 18:10:54.413275 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:10:54.413285 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:10:54.413296 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:10:54.413306 | orchestrator | 2025-08-29 18:10:54.413317 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:10:54.413328 | orchestrator | Friday 29 August 2025 18:09:42 +0000 (0:00:00.335) 0:00:59.209 ********* 2025-08-29 18:10:54.413338 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-08-29 18:10:54.413349 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-08-29 18:10:54.413361 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ironic_False) 2025-08-29 18:10:54.413371 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-08-29 18:10:54.413382 | orchestrator | 2025-08-29 18:10:54.413427 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-08-29 18:10:54.413440 | orchestrator | skipping: no hosts matched 2025-08-29 18:10:54.413451 | orchestrator | 2025-08-29 18:10:54.413461 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:10:54.413473 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:10:54.413486 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:10:54.413499 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:10:54.413509 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:10:54.413520 | orchestrator | 2025-08-29 18:10:54.413531 | orchestrator | 2025-08-29 18:10:54.413542 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:10:54.413552 | orchestrator | Friday 29 August 2025 18:09:42 +0000 (0:00:00.452) 0:00:59.662 ********* 2025-08-29 18:10:54.413563 | orchestrator | =============================================================================== 2025-08-29 18:10:54.413574 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 52.61s 2025-08-29 18:10:54.413584 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.34s 2025-08-29 18:10:54.413595 | orchestrator | Ensure the destination directory exists --------------------------------- 1.70s 2025-08-29 18:10:54.413605 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 
2025-08-29 18:10:54.413625 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-08-29 18:10:54.413636 | orchestrator | 2025-08-29 18:10:54.413646 | orchestrator | 2025-08-29 18:10:54.413657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:10:54.413667 | orchestrator | 2025-08-29 18:10:54.413678 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:10:54.413688 | orchestrator | Friday 29 August 2025 18:09:47 +0000 (0:00:00.317) 0:00:00.317 ********* 2025-08-29 18:10:54.413699 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:10:54.413709 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:10:54.413720 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:10:54.413731 | orchestrator | 2025-08-29 18:10:54.413748 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:10:54.413767 | orchestrator | Friday 29 August 2025 18:09:48 +0000 (0:00:00.340) 0:00:00.657 ********* 2025-08-29 18:10:54.413785 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 18:10:54.413803 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 18:10:54.413857 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 18:10:54.413870 | orchestrator | 2025-08-29 18:10:54.413881 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 18:10:54.413891 | orchestrator | 2025-08-29 18:10:54.413902 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 18:10:54.413923 | orchestrator | Friday 29 August 2025 18:09:48 +0000 (0:00:00.526) 0:00:01.184 ********* 2025-08-29 18:10:54.413935 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:09:49 +0000 (0:00:00.605)  0:00:01.789 *********
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:09:52 +0000 (0:00:03.386)  0:00:05.175 *********
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:09:58 +0000 (0:00:06.193)  0:00:11.369 *********
2025-08-29 18:10:54 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:01 +0000 (0:00:03.145)  0:00:14.515 *********
2025-08-29 18:10:54 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:05 +0000 (0:00:03.642)  0:00:18.157 *********
2025-08-29 18:10:54 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:08 +0000 (0:00:03.230)  0:00:21.388 *********
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:12 +0000 (0:00:03.998)  0:00:25.386 *********
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:13 +0000 (0:00:01.094)  0:00:25.722 *********
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-1] => (item=placement-api; identical to the item above except healthcheck_curl http://192.168.16.11:8780)
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-2] => (item=placement-api; identical to the item above except healthcheck_curl http://192.168.16.12:8780)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:14 +0000 (0:00:00.153)  0:00:26.816 *********
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:14 +0000 (0:00:00.608)  0:00:26.970 *********
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:14 +0000 (0:00:00.797)  0:00:27.578 *********
2025-08-29 18:10:54 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:15 +0000 (0:00:01.679)  0:00:28.376 *********
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item=placement-api; same item as above, healthcheck_curl http://192.168.16.10:8780)
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-1] => (item=placement-api; same item as above, healthcheck_curl http://192.168.16.11:8780)
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-2] => (item=placement-api; same item as above, healthcheck_curl http://192.168.16.12:8780)
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:17 +0000 (0:00:00.929)  0:00:30.055 *********
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0] => (item=placement-api; same item as above)
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-1] => (item=placement-api; same item as above)
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-2] => (item=placement-api; same item as above)
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:18 +0000 (0:00:00.862)  0:00:30.985 *********
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0] => (item=placement-api; same item as above)
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-1] => (item=placement-api; same item as above)
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-2] => (item=placement-api; same item as above)
2025-08-29 18:10:54 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:10:54 | orchestrator |
2025-08-29 18:10:54 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-08-29 18:10:54 | orchestrator | Friday 29 August 2025 18:10:19 +0000  0:00:31.847 *********
2025-08-29 18:10:54 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes':
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415095 | orchestrator | 2025-08-29 18:10:54.415106 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 18:10:54.415116 | orchestrator | Friday 29 August 2025 18:10:20 +0000 (0:00:01.335) 0:00:33.182 ********* 2025-08-29 18:10:54.415127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415173 | orchestrator | 2025-08-29 18:10:54.415184 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 18:10:54.415194 | orchestrator | Friday 29 August 2025 18:10:22 +0000 (0:00:02.369) 0:00:35.552 ********* 
2025-08-29 18:10:54.415211 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-08-29 18:10:54.415222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-08-29 18:10:54.415233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-08-29 18:10:54.415243 | orchestrator |
2025-08-29 18:10:54.415254 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-08-29 18:10:54.415265 | orchestrator | Friday 29 August 2025 18:10:24 +0000 (0:00:01.630) 0:00:37.182 *********
2025-08-29 18:10:54.415275 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:54.415286 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:10:54.415297 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:10:54.415307 | orchestrator |
2025-08-29 18:10:54.415318 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-08-29 18:10:54.415328 | orchestrator | Friday 29 August 2025 18:10:26 +0000 (0:00:01.621) 0:00:38.804 *********
2025-08-29 18:10:54.415339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external':
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 18:10:54.415351 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:54.415362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 18:10:54.415373 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:54.415384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 18:10:54.415483 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:54.415494 | orchestrator | 2025-08-29 18:10:54.415505 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 18:10:54.415516 | orchestrator | Friday 29 August 2025 18:10:26 +0000 (0:00:00.651) 0:00:39.456 ********* 2025-08-29 18:10:54.415543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 18:10:54.415579 | orchestrator | 2025-08-29 18:10:54.415589 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 18:10:54.415600 | orchestrator | Friday 29 August 2025 18:10:29 +0000 (0:00:02.503) 0:00:41.959 ********* 2025-08-29 18:10:54.415611 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:10:54.415621 | orchestrator | 2025-08-29 18:10:54.415632 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 18:10:54.415643 | orchestrator | Friday 29 
August 2025 18:10:31 +0000 (0:00:02.318) 0:00:44.278 *********
2025-08-29 18:10:54.415653 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:54.415672 | orchestrator |
2025-08-29 18:10:54.415683 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-08-29 18:10:54.415693 | orchestrator | Friday 29 August 2025 18:10:34 +0000 (0:00:02.344) 0:00:46.622 *********
2025-08-29 18:10:54.415704 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:54.415714 | orchestrator |
2025-08-29 18:10:54.415725 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-08-29 18:10:54.415736 | orchestrator | Friday 29 August 2025 18:10:46 +0000 (0:00:12.513) 0:00:59.136 *********
2025-08-29 18:10:54.415746 | orchestrator |
2025-08-29 18:10:54.415757 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-08-29 18:10:54.415768 | orchestrator | Friday 29 August 2025 18:10:46 +0000 (0:00:00.068) 0:00:59.204 *********
2025-08-29 18:10:54.415778 | orchestrator |
2025-08-29 18:10:54.415789 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-08-29 18:10:54.415799 | orchestrator | Friday 29 August 2025 18:10:46 +0000 (0:00:00.064) 0:00:59.268 *********
2025-08-29 18:10:54.415810 | orchestrator |
2025-08-29 18:10:54.415820 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-08-29 18:10:54.415831 | orchestrator | Friday 29 August 2025 18:10:46 +0000 (0:00:00.067) 0:00:59.336 *********
2025-08-29 18:10:54.415841 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:54.415852 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:10:54.415863 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:10:54.415873 | orchestrator |
2025-08-29 18:10:54.415884 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:10:54.415900 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 18:10:54.415917 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 18:10:54.415928 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 18:10:54.415939 | orchestrator |
2025-08-29 18:10:54.415950 | orchestrator |
2025-08-29 18:10:54.415960 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:10:54.415971 | orchestrator | Friday 29 August 2025 18:10:52 +0000 (0:00:05.445) 0:01:04.781 *********
2025-08-29 18:10:54.415981 | orchestrator | ===============================================================================
2025-08-29 18:10:54.415992 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.51s
2025-08-29 18:10:54.416003 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.19s
2025-08-29 18:10:54.416014 | orchestrator | placement : Restart placement-api container ----------------------------- 5.45s
2025-08-29 18:10:54.416024 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.00s
2025-08-29 18:10:54.416035 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.64s
2025-08-29 18:10:54.416045 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.39s
2025-08-29 18:10:54.416056 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.23s
2025-08-29 18:10:54.416066 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.15s
2025-08-29 18:10:54.416077 | orchestrator | placement : Check placement containers ---------------------------------- 2.50s
2025-08-29 18:10:54.416088 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.37s
2025-08-29 18:10:54.416098 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s
2025-08-29 18:10:54.416108 | orchestrator | placement : Creating placement databases -------------------------------- 2.32s
2025-08-29 18:10:54.416119 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.68s
2025-08-29 18:10:54.416130 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.63s
2025-08-29 18:10:54.416148 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.62s
2025-08-29 18:10:54.416159 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s
2025-08-29 18:10:54.416169 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.09s
2025-08-29 18:10:54.416180 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.93s
2025-08-29 18:10:54.416190 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.86s
2025-08-29 18:10:54.416201 | orchestrator | placement : include_tasks ----------------------------------------------- 0.80s
2025-08-29 18:10:54.416211 | orchestrator | 2025-08-29 18:10:54 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED
2025-08-29 18:10:54.416222 | orchestrator | 2025-08-29 18:10:54 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:10:54.416233 | orchestrator | 2025-08-29 18:10:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:10:57.469758 | orchestrator | 2025-08-29 18:10:57 | INFO  | Task eab70c59-7e96-40eb-a57b-a42e71b1e396 is in state SUCCESS
2025-08-29 18:10:57.470909 | orchestrator |
2025-08-29 18:10:57.470948 | orchestrator |
2025-08-29 18:10:57.470959 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:10:57.470970 | orchestrator |
2025-08-29 18:10:57.470979 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:10:57.470989 | orchestrator | Friday 29 August 2025 18:07:40 +0000 (0:00:00.808) 0:00:00.808 *********
2025-08-29 18:10:57.470998 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:10:57.471009 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:10:57.471018 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:10:57.471027 | orchestrator |
2025-08-29 18:10:57.471035 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:10:57.471043 | orchestrator | Friday 29 August 2025 18:07:41 +0000 (0:00:00.563) 0:00:01.372 *********
2025-08-29 18:10:57.471052 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-08-29 18:10:57.471062 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-08-29 18:10:57.471070 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-08-29 18:10:57.471078 | orchestrator |
2025-08-29 18:10:57.471086 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-08-29 18:10:57.471095 | orchestrator |
2025-08-29 18:10:57.471103 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-08-29 18:10:57.471111 | orchestrator | Friday 29 August 2025 18:07:41 +0000 (0:00:00.827) 0:00:02.199 *********
2025-08-29 18:10:57.471119 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:10:57.471128 | orchestrator |
2025-08-29 18:10:57.471136 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-08-29 18:10:57.471144 | orchestrator | Friday 29 August 2025 18:07:42 +0000 (0:00:00.941) 0:00:03.141 *********
2025-08-29 18:10:57.471153 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-08-29 18:10:57.471161 | orchestrator |
2025-08-29 18:10:57.471179 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-08-29 18:10:57.471188 | orchestrator | Friday 29 August 2025 18:07:46 +0000 (0:00:03.498) 0:00:06.640 *********
2025-08-29 18:10:57.471196 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-08-29 18:10:57.471205 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-08-29 18:10:57.471213 | orchestrator |
2025-08-29 18:10:57.471221 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-08-29 18:10:57.471229 | orchestrator | Friday 29 August 2025 18:07:52 +0000 (0:00:06.326) 0:00:12.966 *********
2025-08-29 18:10:57.471254 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 18:10:57.471263 | orchestrator |
2025-08-29 18:10:57.471304 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-08-29 18:10:57.471313 | orchestrator | Friday 29 August 2025 18:07:55 +0000 (0:00:03.201) 0:00:16.168 *********
2025-08-29 18:10:57.471321 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 18:10:57.471329 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-08-29 18:10:57.471337 | orchestrator |
2025-08-29 18:10:57.471365 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-08-29 18:10:57.471374 | orchestrator | Friday 29 August 2025 18:07:59 +0000 (0:00:03.910) 0:00:20.079 *********
2025-08-29 18:10:57.471382 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 18:10:57.471436 | orchestrator | 2025-08-29 18:10:57.471445 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 18:10:57.471453 | orchestrator | Friday 29 August 2025 18:08:03 +0000 (0:00:03.750) 0:00:23.830 ********* 2025-08-29 18:10:57.471513 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 18:10:57.471522 | orchestrator | 2025-08-29 18:10:57.471530 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 18:10:57.471538 | orchestrator | Friday 29 August 2025 18:08:07 +0000 (0:00:04.152) 0:00:27.982 ********* 2025-08-29 18:10:57.471548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.471573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.471583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.471639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 
18:10:57.471782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471799 | orchestrator | 2025-08-29 18:10:57.471807 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-08-29 18:10:57.471815 | orchestrator | Friday 29 August 2025 18:08:10 +0000 (0:00:03.326) 0:00:31.309 ********* 2025-08-29 18:10:57.471822 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:57.471830 | orchestrator | 2025-08-29 18:10:57.471838 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 18:10:57.471846 | orchestrator | Friday 29 August 2025 18:08:11 +0000 (0:00:00.218) 0:00:31.527 ********* 2025-08-29 18:10:57.471853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
18:10:57.471861 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:57.471869 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:57.471876 | orchestrator | 2025-08-29 18:10:57.471884 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 18:10:57.471891 | orchestrator | Friday 29 August 2025 18:08:11 +0000 (0:00:00.448) 0:00:31.975 ********* 2025-08-29 18:10:57.471899 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:10:57.471907 | orchestrator | 2025-08-29 18:10:57.471914 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 18:10:57.471922 | orchestrator | Friday 29 August 2025 18:08:13 +0000 (0:00:01.645) 0:00:33.621 ********* 2025-08-29 18:10:57.471934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.471943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.471960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.471969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.471999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.472134 | orchestrator | 2025-08-29 18:10:57.472142 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 18:10:57.472149 | orchestrator | Friday 29 August 2025 18:08:20 +0000 (0:00:07.177) 0:00:40.799 ********* 2025-08-29 18:10:57.472158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.472180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.472233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2025-08-29 18:10:57.472271 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:57.472279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.472673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.472691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:57.472738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.472759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.472768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 
18:10:57.472796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472804 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:57.472812 | orchestrator | 2025-08-29 18:10:57.472820 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 18:10:57.472828 | orchestrator | Friday 29 August 2025 18:08:23 +0000 (0:00:02.914) 0:00:43.713 ********* 2025-08-29 18:10:57.472841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.472854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.472863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.472883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.472891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.472965 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:57.472973 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:57.472982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.472995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.473024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473068 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:57.473076 | orchestrator | 2025-08-29 18:10:57.473084 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 
18:10:57.473092 | orchestrator | Friday 29 August 2025 18:08:26 +0000 (0:00:03.191) 0:00:46.905 ********* 2025-08-29 18:10:57.473100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.473114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.473127 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.473135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473344 | orchestrator | 2025-08-29 18:10:57.473353 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 18:10:57.473361 | orchestrator | Friday 29 August 2025 18:08:32 +0000 (0:00:06.389) 0:00:53.295 ********* 2025-08-29 18:10:57.473371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.473385 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.473418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.473428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473603 | orchestrator | 2025-08-29 18:10:57.473612 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 18:10:57.473621 | orchestrator | Friday 29 August 2025 18:08:58 +0000 (0:00:25.313) 0:01:18.608 ********* 2025-08-29 18:10:57.473630 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 18:10:57.473639 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 18:10:57.473647 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 18:10:57.473656 | orchestrator | 2025-08-29 18:10:57.473665 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 18:10:57.473673 | orchestrator | Friday 29 August 2025 18:09:04 +0000 (0:00:05.895) 0:01:24.503 ********* 2025-08-29 18:10:57.473681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 18:10:57.473688 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 18:10:57.473696 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 18:10:57.473704 | orchestrator | 2025-08-29 18:10:57.473712 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-08-29 18:10:57.473719 | orchestrator | Friday 29 August 2025 18:09:08 +0000 (0:00:04.204) 0:01:28.708 ********* 2025-08-29 18:10:57.473733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.473742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.473768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-08-29 18:10:57.473777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2025-08-29 18:10:57.473808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-08-29 18:10:57.473842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2025-08-29 18:10:57.473872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.473908 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.473933 | orchestrator | 2025-08-29 18:10:57.473941 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2025-08-29 18:10:57.473949 | orchestrator | Friday 29 August 2025 18:09:11 +0000 (0:00:03.168) 0:01:31.876 ********* 2025-08-29 18:10:57.473963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.473977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 
18:10:57.473989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.473997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474459 | orchestrator | 2025-08-29 18:10:57.474467 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 18:10:57.474475 | orchestrator | Friday 29 August 2025 18:09:14 +0000 (0:00:03.162) 0:01:35.039 ********* 2025-08-29 18:10:57.474483 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:57.474491 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:57.474499 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:57.474506 | orchestrator | 2025-08-29 18:10:57.474514 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-08-29 18:10:57.474522 | orchestrator | Friday 29 August 2025 18:09:15 +0000 (0:00:00.454) 0:01:35.493 ********* 2025-08-29 18:10:57.474543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.474553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.474565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474604 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:57.474617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.474626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.474638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 18:10:57.474663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 18:10:57.474690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:10:57.474740 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:57.474748 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:57.474756 | orchestrator | 2025-08-29 18:10:57.474764 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 18:10:57.474771 | orchestrator | Friday 29 August 2025 18:09:16 +0000 (0:00:01.106) 0:01:36.600 ********* 2025-08-29 18:10:57.474784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.474793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.474805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 18:10:57.474813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:10:57.474979 | orchestrator | 2025-08-29 18:10:57.474987 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 18:10:57.474994 | orchestrator | Friday 29 August 2025 18:09:20 +0000 (0:00:04.570) 0:01:41.170 ********* 2025-08-29 18:10:57.475002 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:10:57.475009 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:10:57.475016 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:10:57.475024 | orchestrator | 2025-08-29 18:10:57.475031 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2025-08-29 18:10:57.475056 | orchestrator | Friday 29 August 2025 18:09:21 +0000 (0:00:00.517) 0:01:41.688 ********* 2025-08-29 18:10:57.475064 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 18:10:57.475072 | orchestrator | 2025-08-29 18:10:57.475079 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-08-29 18:10:57.475087 | orchestrator | Friday 29 August 2025 18:09:23 +0000 (0:00:02.044) 0:01:43.732 ********* 2025-08-29 18:10:57.475094 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 18:10:57.475102 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 18:10:57.475109 | orchestrator | 2025-08-29 18:10:57.475117 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 18:10:57.475128 | orchestrator | Friday 29 August 2025 18:09:25 +0000 (0:00:02.548) 0:01:46.281 ********* 2025-08-29 18:10:57.475136 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:10:57.475143 | orchestrator | 2025-08-29 18:10:57.475151 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 18:10:57.475158 | orchestrator | Friday 29 August 2025 18:09:42 +0000 (0:00:16.923) 0:02:03.205 ********* 2025-08-29 18:10:57.475165 | orchestrator | 2025-08-29 18:10:57.475173 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 18:10:57.475180 | orchestrator | Friday 29 August 2025 18:09:42 +0000 (0:00:00.105) 0:02:03.311 ********* 2025-08-29 18:10:57.475188 | orchestrator | 2025-08-29 18:10:57.475195 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 18:10:57.475202 | orchestrator | Friday 29 August 2025 18:09:43 +0000 (0:00:00.089) 0:02:03.401 ********* 2025-08-29 18:10:57.475209 | orchestrator | 2025-08-29 
18:10:57.475217 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-08-29 18:10:57.475225 | orchestrator | Friday 29 August 2025 18:09:43 +0000 (0:00:00.067) 0:02:03.469 ********* 2025-08-29 18:10:57.475232 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:10:57.475240 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:10:57.475248 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:10:57.475255 | orchestrator | 2025-08-29 18:10:57.475263 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 18:10:57.475270 | orchestrator | Friday 29 August 2025 18:09:56 +0000 (0:00:13.562) 0:02:17.031 ********* 2025-08-29 18:10:57.475277 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:10:57.475283 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:10:57.475290 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:10:57.475296 | orchestrator | 2025-08-29 18:10:57.475302 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 18:10:57.475313 | orchestrator | Friday 29 August 2025 18:10:04 +0000 (0:00:07.611) 0:02:24.643 ********* 2025-08-29 18:10:57.475320 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:10:57.475330 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:10:57.475336 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:10:57.475343 | orchestrator | 2025-08-29 18:10:57.475349 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 18:10:57.475356 | orchestrator | Friday 29 August 2025 18:10:16 +0000 (0:00:11.730) 0:02:36.373 ********* 2025-08-29 18:10:57.475362 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:10:57.475369 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:10:57.475375 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:10:57.475382 | orchestrator | 2025-08-29 18:10:57.475401 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-08-29 18:10:57.475408 | orchestrator | Friday 29 August 2025 18:10:28 +0000 (0:00:12.072) 0:02:48.446 *********
2025-08-29 18:10:57.475415 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:57.475421 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:10:57.475428 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:10:57.475435 | orchestrator |
2025-08-29 18:10:57.475442 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-08-29 18:10:57.475448 | orchestrator | Friday 29 August 2025 18:10:36 +0000 (0:00:08.504) 0:02:56.950 *********
2025-08-29 18:10:57.475455 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:10:57.475461 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:57.475468 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:10:57.475474 | orchestrator |
2025-08-29 18:10:57.475481 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-08-29 18:10:57.475488 | orchestrator | Friday 29 August 2025 18:10:48 +0000 (0:00:11.563) 0:03:08.514 *********
2025-08-29 18:10:57.475494 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:10:57.475501 | orchestrator |
2025-08-29 18:10:57.475521 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:10:57.475528 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 18:10:57.475535 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 18:10:57.475542 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 18:10:57.475549 | orchestrator |
2025-08-29 18:10:57.475555 | orchestrator |
2025-08-29 18:10:57.475562 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:10:57.475568 | orchestrator | Friday 29 August 2025 18:10:54 +0000 (0:00:06.377) 0:03:14.892 *********
2025-08-29 18:10:57.475575 | orchestrator | ===============================================================================
2025-08-29 18:10:57.475581 | orchestrator | designate : Copying over designate.conf -------------------------------- 25.31s
2025-08-29 18:10:57.475588 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.92s
2025-08-29 18:10:57.475594 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.56s
2025-08-29 18:10:57.475601 | orchestrator | designate : Restart designate-producer container ----------------------- 12.07s
2025-08-29 18:10:57.475607 | orchestrator | designate : Restart designate-central container ------------------------ 11.73s
2025-08-29 18:10:57.475614 | orchestrator | designate : Restart designate-worker container ------------------------- 11.56s
2025-08-29 18:10:57.475620 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.50s
2025-08-29 18:10:57.475627 | orchestrator | designate : Restart designate-api container ----------------------------- 7.61s
2025-08-29 18:10:57.475633 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.18s
2025-08-29 18:10:57.475644 | orchestrator | designate : Copying over config.json files for services ----------------- 6.39s
2025-08-29 18:10:57.475656 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.38s
2025-08-29 18:10:57.475662 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.33s
2025-08-29 18:10:57.475669 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.90s
2025-08-29 18:10:57.475675 | orchestrator | designate : Check designate containers ---------------------------------- 4.57s
2025-08-29 18:10:57.475682 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.20s
2025-08-29 18:10:57.475688 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.15s
2025-08-29 18:10:57.475694 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.91s
2025-08-29 18:10:57.475701 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.75s
2025-08-29 18:10:57.475707 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.50s
2025-08-29 18:10:57.475714 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.33s
2025-08-29 18:10:57.475720 | orchestrator | 2025-08-29 18:10:57 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:10:57.475727 | orchestrator | 2025-08-29 18:10:57 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED
2025-08-29 18:10:57.475734 | orchestrator | 2025-08-29 18:10:57 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED
2025-08-29 18:10:57.476762 | orchestrator | 2025-08-29 18:10:57 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state STARTED
2025-08-29 18:10:57.476775 | orchestrator | 2025-08-29 18:10:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:11:00.518584 | orchestrator | 2025-08-29 18:11:00 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED
2025-08-29 18:11:00.519930 | orchestrator | 2025-08-29 18:11:00 | INFO  | Task 89f0809a-1fd0-4b2f-b93b-b2e73579aa1a is in state STARTED
2025-08-29 18:11:00.521920 | orchestrator | 2025-08-29 18:11:00 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED
2025-08-29 18:11:00.523462 | orchestrator | 2025-08-29 18:11:00 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED
2025-08-29 18:11:00.527989 | orchestrator |
2025-08-29 18:11:00.528028 | orchestrator |
2025-08-29 18:11:00.528039 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:11:00.528050 | orchestrator |
2025-08-29 18:11:00.528060 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:11:00.528070 | orchestrator | Friday 29 August 2025 18:06:15 +0000 (0:00:00.404) 0:00:00.404 *********
2025-08-29 18:11:00.528080 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:11:00.528091 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:11:00.528100 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:11:00.528109 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:11:00.528119 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:11:00.528128 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:11:00.528138 | orchestrator |
2025-08-29 18:11:00.528147 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:11:00.528157 | orchestrator | Friday 29 August 2025 18:06:15 +0000 (0:00:00.741) 0:00:01.145 *********
2025-08-29 18:11:00.528166 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-08-29 18:11:00.528177 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-08-29 18:11:00.528186 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-08-29 18:11:00.528196 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-08-29 18:11:00.528205 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-08-29 18:11:00.528215 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-08-29 18:11:00.528245 | orchestrator |
2025-08-29 18:11:00.528256 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-08-29 18:11:00.528323 | orchestrator |
2025-08-29 18:11:00.528334 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 18:11:00.528344 | orchestrator | Friday 29 August 2025 18:06:16 +0000 (0:00:00.681) 0:00:01.827 *********
2025-08-29 18:11:00.528367 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 18:11:00.528379 | orchestrator |
2025-08-29 18:11:00.528408 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-08-29 18:11:00.528419 | orchestrator | Friday 29 August 2025 18:06:17 +0000 (0:00:01.309) 0:00:03.136 *********
2025-08-29 18:11:00.528428 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:11:00.528438 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:11:00.528447 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:11:00.528541 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:11:00.528551 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:11:00.528560 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:11:00.528570 | orchestrator |
2025-08-29 18:11:00.528579 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-08-29 18:11:00.528590 | orchestrator | Friday 29 August 2025 18:06:19 +0000 (0:00:01.310) 0:00:04.447 *********
2025-08-29 18:11:00.528601 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:11:00.528639 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:11:00.528651 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:11:00.528661 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:11:00.528672 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:11:00.528683 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:11:00.528694 | orchestrator |
2025-08-29 18:11:00.528704 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-08-29 18:11:00.528715 | orchestrator | Friday 29 August 2025 18:06:20 +0000 (0:00:01.085) 0:00:05.532 *********
2025-08-29 18:11:00.528726 | orchestrator | ok: [testbed-node-0] => {
2025-08-29 18:11:00.528739 | orchestrator |  "changed": false,
2025-08-29 18:11:00.528749 | orchestrator |  "msg": "All assertions passed"
2025-08-29 18:11:00.528761 | orchestrator | }
2025-08-29 18:11:00.528772 | orchestrator | ok: [testbed-node-1] => {
2025-08-29 18:11:00.528783 | orchestrator |  "changed": false,
2025-08-29 18:11:00.528794 | orchestrator |  "msg": "All assertions passed"
2025-08-29 18:11:00.528804 | orchestrator | }
2025-08-29 18:11:00.528815 | orchestrator | ok: [testbed-node-2] => {
2025-08-29 18:11:00.528826 | orchestrator |  "changed": false,
2025-08-29 18:11:00.528837 | orchestrator |  "msg": "All assertions passed"
2025-08-29 18:11:00.528848 | orchestrator | }
2025-08-29 18:11:00.528859 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 18:11:00.528869 | orchestrator |  "changed": false,
2025-08-29 18:11:00.528880 | orchestrator |  "msg": "All assertions passed"
2025-08-29 18:11:00.528891 | orchestrator | }
2025-08-29 18:11:00.528902 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 18:11:00.528913 | orchestrator |  "changed": false,
2025-08-29 18:11:00.528924 | orchestrator |  "msg": "All assertions passed"
2025-08-29 18:11:00.528935 | orchestrator | }
2025-08-29 18:11:00.528946 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 18:11:00.528956 | orchestrator |  "changed": false,
2025-08-29 18:11:00.528965 | orchestrator |  "msg": "All assertions passed"
2025-08-29 18:11:00.528975 | orchestrator | }
2025-08-29 18:11:00.528984 | orchestrator |
2025-08-29 18:11:00.528994 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-08-29 18:11:00.529003 | orchestrator | Friday 29 August 2025 18:06:21 +0000 (0:00:00.888) 0:00:06.421 *********
2025-08-29 18:11:00.529013 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:11:00.529022 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:11:00.529032 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:11:00.529041 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:11:00.529058 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:11:00.529068 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:11:00.529077 | orchestrator |
2025-08-29 18:11:00.529087 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-08-29 18:11:00.529096 | orchestrator | Friday 29 August 2025 18:06:21 +0000 (0:00:00.604) 0:00:07.025 *********
2025-08-29 18:11:00.529106 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-08-29 18:11:00.529115 | orchestrator |
2025-08-29 18:11:00.529135 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-08-29 18:11:00.529145 | orchestrator | Friday 29 August 2025 18:06:24 +0000 (0:00:03.114) 0:00:10.139 *********
2025-08-29 18:11:00.529154 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-08-29 18:11:00.529165 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-08-29 18:11:00.529174 | orchestrator |
2025-08-29 18:11:00.529195 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-08-29 18:11:00.529205 | orchestrator | Friday 29 August 2025 18:06:31 +0000 (0:00:06.415) 0:00:16.554 *********
2025-08-29 18:11:00.529215 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 18:11:00.529224 | orchestrator |
2025-08-29 18:11:00.529234 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-08-29 18:11:00.529243 | orchestrator | Friday 29 August 2025 18:06:34 +0000 (0:00:03.050) 0:00:19.605 *********
2025-08-29 18:11:00.529253 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 18:11:00.529262 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-08-29 18:11:00.529272 | orchestrator |
2025-08-29 18:11:00.529282 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-08-29 18:11:00.529291 | orchestrator | Friday 29 August 2025 18:06:38 +0000 (0:00:03.883) 0:00:23.489 *********
2025-08-29 18:11:00.529301 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 18:11:00.529310 | orchestrator |
2025-08-29 18:11:00.529320 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-08-29 18:11:00.529329 | orchestrator | Friday 29 August 2025 18:06:41 +0000 (0:00:03.316) 0:00:26.805 *********
2025-08-29 18:11:00.529339 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-08-29 18:11:00.529348 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-08-29 18:11:00.529358 | orchestrator |
2025-08-29 18:11:00.529367 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 18:11:00.529377 | orchestrator | Friday 29 August 2025 18:06:49 +0000 (0:00:07.554) 0:00:34.360 *********
2025-08-29 18:11:00.529386 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:11:00.529422 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:11:00.529432 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:11:00.529442 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:11:00.529451 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:11:00.529461 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:11:00.529470 | orchestrator |
2025-08-29 18:11:00.529480 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-08-29 18:11:00.529490 | orchestrator | Friday 29 August 2025 18:06:50 +0000 (0:00:00.911) 0:00:35.271 *********
2025-08-29 18:11:00.529499 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:11:00.529509 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:11:00.529518 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:11:00.529528 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:11:00.529537 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:11:00.529547 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:11:00.529556 | orchestrator |
2025-08-29 18:11:00.529566 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-08-29 18:11:00.529575 | orchestrator | Friday 29 August 2025 18:06:53 +0000 (0:00:03.742) 0:00:39.013 *********
2025-08-29 18:11:00.529592 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:11:00.529602 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:11:00.529611 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:11:00.529621 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:11:00.529630 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:11:00.529639 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:11:00.529649 | orchestrator |
2025-08-29 18:11:00.529658 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 18:11:00.529668 | orchestrator | Friday 29 August 2025 18:06:55 +0000 (0:00:01.430) 0:00:40.443 *********
2025-08-29 18:11:00.529678 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:11:00.529687 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:11:00.529697 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:11:00.529706 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:11:00.529716 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:11:00.529725 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:11:00.529735 | orchestrator |
2025-08-29 18:11:00.529744 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-08-29 18:11:00.529754 | orchestrator | Friday 29
August 2025 18:06:57 +0000 (0:00:02.670) 0:00:43.114 ********* 2025-08-29 18:11:00.529767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.529794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.529806 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.529822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.529833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.529843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.529853 | orchestrator | 2025-08-29 18:11:00.529863 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 18:11:00.529877 | orchestrator | Friday 29 August 2025 18:07:01 +0000 (0:00:03.844) 0:00:46.958 ********* 2025-08-29 18:11:00.529887 | orchestrator | [WARNING]: Skipped 2025-08-29 18:11:00.529897 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 18:11:00.529907 | orchestrator | due to this access issue: 2025-08-29 18:11:00.529916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 18:11:00.529926 | orchestrator | a directory 
2025-08-29 18:11:00.529935 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 18:11:00.529945 | orchestrator | 2025-08-29 18:11:00.529959 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 18:11:00.529969 | orchestrator | Friday 29 August 2025 18:07:02 +0000 (0:00:00.972) 0:00:47.931 ********* 2025-08-29 18:11:00.529979 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:11:00.529990 | orchestrator | 2025-08-29 18:11:00.530000 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 18:11:00.530009 | orchestrator | Friday 29 August 2025 18:07:04 +0000 (0:00:01.377) 0:00:49.308 ********* 2025-08-29 18:11:00.530481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.530508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.530519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.530535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.530582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.530601 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.530611 | orchestrator | 2025-08-29 18:11:00.530621 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-08-29 18:11:00.530631 | orchestrator | Friday 29 August 2025 18:07:08 +0000 (0:00:04.426) 0:00:53.735 ********* 2025-08-29 18:11:00.530641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.530651 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.530662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.530672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.530713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.530725 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.530735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.530751 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.530761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.530771 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.530781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.530791 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.530800 | orchestrator | 2025-08-29 18:11:00.530810 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-08-29 18:11:00.530819 | orchestrator | Friday 29 August 2025 18:07:11 +0000 (0:00:03.419) 0:00:57.155 ********* 2025-08-29 18:11:00.530842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.530853 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.530892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.530910 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.530920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.530930 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.530940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.530950 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.530960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.530970 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.530985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 
18:11:00.531000 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531010 | orchestrator | 2025-08-29 18:11:00.531020 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-08-29 18:11:00.531034 | orchestrator | Friday 29 August 2025 18:07:14 +0000 (0:00:02.543) 0:00:59.698 ********* 2025-08-29 18:11:00.531044 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.531053 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.531063 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.531073 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.531085 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531095 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.531106 | orchestrator | 2025-08-29 18:11:00.531117 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-08-29 18:11:00.531128 | orchestrator | Friday 29 August 2025 18:07:17 +0000 (0:00:02.745) 0:01:02.444 ********* 2025-08-29 18:11:00.531139 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.531150 | orchestrator | 2025-08-29 18:11:00.531161 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-08-29 18:11:00.531172 | orchestrator | Friday 29 August 2025 18:07:17 +0000 (0:00:00.126) 0:01:02.570 ********* 2025-08-29 18:11:00.531183 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.531194 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.531205 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.531217 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531227 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.531238 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.531249 | orchestrator | 2025-08-29 18:11:00.531260 | orchestrator | TASK [neutron : Copying over existing policy file] 
***************************** 2025-08-29 18:11:00.531271 | orchestrator | Friday 29 August 2025 18:07:18 +0000 (0:00:00.689) 0:01:03.259 ********* 2025-08-29 18:11:00.531283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.531294 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.531306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.531317 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.531333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.531351 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.531369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-08-29 18:11:00.531381 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.531413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531426 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531448 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.531457 | orchestrator | 2025-08-29 18:11:00.531467 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 18:11:00.531477 
| orchestrator | Friday 29 August 2025 18:07:20 +0000 (0:00:02.703) 0:01:05.963 ********* 2025-08-29 18:11:00.531487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 
18:11:00.531524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.531534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.531561 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.531571 | orchestrator | 2025-08-29 18:11:00.531581 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 18:11:00.531590 | orchestrator | Friday 29 August 2025 18:07:24 +0000 (0:00:03.861) 0:01:09.825 ********* 2025-08-29 18:11:00.531610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.531657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.531677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.531687 | orchestrator | 2025-08-29 18:11:00.531697 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 18:11:00.531707 | orchestrator | Friday 29 August 2025 18:07:33 +0000 (0:00:09.238) 0:01:19.063 ********* 2025-08-29 18:11:00.531717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531743 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.531753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531763 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.531777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531787 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.531825 | orchestrator | 2025-08-29 18:11:00.531835 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-08-29 18:11:00.531845 | orchestrator | Friday 29 August 2025 18:07:37 +0000 (0:00:03.983) 0:01:23.046 ********* 2025-08-29 18:11:00.531854 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531864 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:11:00.531873 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.531888 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.531898 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:11:00.531907 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:11:00.531917 | orchestrator | 2025-08-29 18:11:00.531926 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-08-29 18:11:00.531936 | orchestrator | Friday 29 August 2025 18:07:42 +0000 (0:00:04.456) 0:01:27.503 ********* 2025-08-29 18:11:00.531946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531956 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.531965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.531975 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.531999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.532010 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.532031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.532046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.532057 | orchestrator | 2025-08-29 18:11:00.532066 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 18:11:00.532076 | orchestrator | Friday 29 August 2025 18:07:47 +0000 (0:00:05.167) 0:01:32.670 ********* 2025-08-29 18:11:00.532086 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532095 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532104 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532114 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532123 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532133 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532142 | orchestrator | 2025-08-29 18:11:00.532152 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 
18:11:00.532161 | orchestrator | Friday 29 August 2025 18:07:49 +0000 (0:00:02.472) 0:01:35.143 ********* 2025-08-29 18:11:00.532170 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532180 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532193 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532203 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532212 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532222 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532231 | orchestrator | 2025-08-29 18:11:00.532241 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 18:11:00.532250 | orchestrator | Friday 29 August 2025 18:07:53 +0000 (0:00:03.440) 0:01:38.583 ********* 2025-08-29 18:11:00.532260 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532269 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532279 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532293 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532303 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532312 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532322 | orchestrator | 2025-08-29 18:11:00.532331 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 18:11:00.532341 | orchestrator | Friday 29 August 2025 18:07:56 +0000 (0:00:03.475) 0:01:42.058 ********* 2025-08-29 18:11:00.532351 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532360 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532375 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532384 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532447 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532457 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532467 | orchestrator | 
2025-08-29 18:11:00.532476 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-08-29 18:11:00.532486 | orchestrator | Friday 29 August 2025 18:08:00 +0000 (0:00:03.353) 0:01:45.412 ********* 2025-08-29 18:11:00.532495 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532505 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532514 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532523 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532533 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532540 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532548 | orchestrator | 2025-08-29 18:11:00.532556 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 18:11:00.532564 | orchestrator | Friday 29 August 2025 18:08:04 +0000 (0:00:04.374) 0:01:49.786 ********* 2025-08-29 18:11:00.532571 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532579 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532586 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532594 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532602 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532609 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532617 | orchestrator | 2025-08-29 18:11:00.532625 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 18:11:00.532632 | orchestrator | Friday 29 August 2025 18:08:08 +0000 (0:00:03.713) 0:01:53.500 ********* 2025-08-29 18:11:00.532640 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 18:11:00.532648 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532656 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  
2025-08-29 18:11:00.532663 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532671 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 18:11:00.532679 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532686 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 18:11:00.532694 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 18:11:00.532702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532710 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532718 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 18:11:00.532725 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532733 | orchestrator | 2025-08-29 18:11:00.532741 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 18:11:00.532749 | orchestrator | Friday 29 August 2025 18:08:11 +0000 (0:00:03.374) 0:01:56.874 ********* 2025-08-29 18:11:00.532757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.532770 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.532797 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-08-29 18:11:00.532813 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.532829 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.532837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.532845 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.532853 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.532866 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.532874 | orchestrator | 2025-08-29 18:11:00.532882 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 18:11:00.532893 | orchestrator | Friday 29 August 2025 18:08:14 +0000 (0:00:03.247) 0:02:00.122 ********* 2025-08-29 18:11:00.532906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.532914 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.532922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.532931 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.532939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.532947 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 18:11:00.532955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.532968 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.532979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.532988 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.533009 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533017 | orchestrator | 2025-08-29 18:11:00.533025 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-08-29 18:11:00.533033 | orchestrator | Friday 29 August 2025 18:08:17 +0000 (0:00:02.593) 0:02:02.715 ********* 2025-08-29 18:11:00.533041 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533048 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533056 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533064 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533071 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533079 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533087 | orchestrator | 2025-08-29 18:11:00.533094 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 18:11:00.533102 | orchestrator | Friday 29 August 2025 18:08:19 +0000 (0:00:02.226) 0:02:04.941 ********* 2025-08-29 18:11:00.533110 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533117 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533125 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
18:11:00.533133 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:11:00.533140 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:11:00.533148 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:11:00.533155 | orchestrator | 2025-08-29 18:11:00.533163 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-08-29 18:11:00.533171 | orchestrator | Friday 29 August 2025 18:08:27 +0000 (0:00:07.554) 0:02:12.496 ********* 2025-08-29 18:11:00.533179 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533186 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533199 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533207 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533214 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533222 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533230 | orchestrator | 2025-08-29 18:11:00.533238 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-08-29 18:11:00.533246 | orchestrator | Friday 29 August 2025 18:08:29 +0000 (0:00:02.529) 0:02:15.026 ********* 2025-08-29 18:11:00.533253 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533261 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533269 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533276 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533284 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533292 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533299 | orchestrator | 2025-08-29 18:11:00.533307 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-08-29 18:11:00.533315 | orchestrator | Friday 29 August 2025 18:08:32 +0000 (0:00:03.161) 0:02:18.187 ********* 2025-08-29 18:11:00.533323 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
18:11:00.533330 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533338 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533345 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533353 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533360 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533368 | orchestrator | 2025-08-29 18:11:00.533376 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-08-29 18:11:00.533384 | orchestrator | Friday 29 August 2025 18:08:37 +0000 (0:00:04.956) 0:02:23.143 ********* 2025-08-29 18:11:00.533406 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533422 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533430 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533437 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533445 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533453 | orchestrator | 2025-08-29 18:11:00.533461 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-08-29 18:11:00.533469 | orchestrator | Friday 29 August 2025 18:08:41 +0000 (0:00:03.878) 0:02:27.022 ********* 2025-08-29 18:11:00.533476 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533484 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533492 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533499 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533507 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533515 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533522 | orchestrator | 2025-08-29 18:11:00.533530 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-08-29 18:11:00.533542 | orchestrator | Friday 29 August 
2025 18:08:45 +0000 (0:00:03.978) 0:02:31.001 ********* 2025-08-29 18:11:00.533550 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533557 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533565 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533573 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533580 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533588 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533595 | orchestrator | 2025-08-29 18:11:00.533603 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-08-29 18:11:00.533611 | orchestrator | Friday 29 August 2025 18:08:50 +0000 (0:00:04.529) 0:02:35.530 ********* 2025-08-29 18:11:00.533619 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533631 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533639 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533646 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533654 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533667 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533674 | orchestrator | 2025-08-29 18:11:00.533682 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-08-29 18:11:00.533690 | orchestrator | Friday 29 August 2025 18:08:53 +0000 (0:00:02.871) 0:02:38.402 ********* 2025-08-29 18:11:00.533698 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533706 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533713 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533721 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533729 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533744 | orchestrator | 2025-08-29 18:11:00.533752 | orchestrator | TASK 
[neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-08-29 18:11:00.533760 | orchestrator | Friday 29 August 2025 18:08:57 +0000 (0:00:04.629) 0:02:43.032 ********* 2025-08-29 18:11:00.533768 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 18:11:00.533776 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533783 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 18:11:00.533791 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533799 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 18:11:00.533807 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533815 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 18:11:00.533823 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533830 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 18:11:00.533838 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533846 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 18:11:00.533854 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533861 | orchestrator | 2025-08-29 18:11:00.533869 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-08-29 18:11:00.533877 | orchestrator | Friday 29 August 2025 18:09:01 +0000 (0:00:03.435) 0:02:46.467 ********* 2025-08-29 18:11:00.533885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.533894 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.533902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.533919 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.533934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.533942 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.533950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 18:11:00.533959 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.533967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.533975 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.533983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 18:11:00.533991 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.533999 | orchestrator | 2025-08-29 18:11:00.534006 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 18:11:00.534044 | orchestrator | Friday 29 August 2025 18:09:04 +0000 (0:00:03.585) 0:02:50.053 ********* 2025-08-29 18:11:00.534065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.534080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.534089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 18:11:00.534098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.534106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.534124 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 18:11:00.534133 | orchestrator | 2025-08-29 18:11:00.534141 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 18:11:00.534153 | orchestrator | Friday 29 August 2025 18:09:09 +0000 (0:00:04.615) 0:02:54.668 ********* 2025-08-29 18:11:00.534161 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:11:00.534169 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:11:00.534177 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:11:00.534184 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:11:00.534192 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:11:00.534200 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:11:00.534207 | orchestrator | 2025-08-29 18:11:00.534215 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-08-29 18:11:00.534223 | orchestrator | Friday 29 August 2025 18:09:10 +0000 (0:00:00.656) 0:02:55.325 ********* 2025-08-29 18:11:00.534231 | orchestrator 
| changed: [testbed-node-0] 2025-08-29 18:11:00.534239 | orchestrator | 2025-08-29 18:11:00.534246 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-08-29 18:11:00.534254 | orchestrator | Friday 29 August 2025 18:09:12 +0000 (0:00:02.053) 0:02:57.378 ********* 2025-08-29 18:11:00.534262 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:11:00.534270 | orchestrator | 2025-08-29 18:11:00.534277 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-08-29 18:11:00.534285 | orchestrator | Friday 29 August 2025 18:09:14 +0000 (0:00:02.315) 0:02:59.694 ********* 2025-08-29 18:11:00.534293 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:11:00.534301 | orchestrator | 2025-08-29 18:11:00.534308 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 18:11:00.534316 | orchestrator | Friday 29 August 2025 18:09:56 +0000 (0:00:42.468) 0:03:42.163 ********* 2025-08-29 18:11:00.534324 | orchestrator | 2025-08-29 18:11:00.534332 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 18:11:00.534339 | orchestrator | Friday 29 August 2025 18:09:57 +0000 (0:00:00.096) 0:03:42.260 ********* 2025-08-29 18:11:00.534347 | orchestrator | 2025-08-29 18:11:00.534355 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 18:11:00.534363 | orchestrator | Friday 29 August 2025 18:09:57 +0000 (0:00:00.085) 0:03:42.345 ********* 2025-08-29 18:11:00.534370 | orchestrator | 2025-08-29 18:11:00.534378 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 18:11:00.534386 | orchestrator | Friday 29 August 2025 18:09:57 +0000 (0:00:00.082) 0:03:42.427 ********* 2025-08-29 18:11:00.534406 | orchestrator | 2025-08-29 18:11:00.534414 | orchestrator | TASK [neutron : Flush 
Handlers] ************************************************ 2025-08-29 18:11:00.534422 | orchestrator | Friday 29 August 2025 18:09:57 +0000 (0:00:00.398) 0:03:42.826 ********* 2025-08-29 18:11:00.534430 | orchestrator | 2025-08-29 18:11:00.534437 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 18:11:00.534450 | orchestrator | Friday 29 August 2025 18:09:57 +0000 (0:00:00.065) 0:03:42.892 ********* 2025-08-29 18:11:00.534458 | orchestrator | 2025-08-29 18:11:00.534466 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-08-29 18:11:00.534474 | orchestrator | Friday 29 August 2025 18:09:57 +0000 (0:00:00.144) 0:03:43.036 ********* 2025-08-29 18:11:00.534481 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:11:00.534489 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:11:00.534497 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:11:00.534505 | orchestrator | 2025-08-29 18:11:00.534513 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-08-29 18:11:00.534520 | orchestrator | Friday 29 August 2025 18:10:23 +0000 (0:00:25.894) 0:04:08.930 ********* 2025-08-29 18:11:00.534528 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:11:00.534536 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:11:00.534543 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:11:00.534551 | orchestrator | 2025-08-29 18:11:00.534559 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:11:00.534567 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 18:11:00.534575 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 18:11:00.534583 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 
skipped=31  rescued=0 ignored=0 2025-08-29 18:11:00.534591 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 18:11:00.534599 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 18:11:00.534607 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-08-29 18:11:00.534615 | orchestrator | 2025-08-29 18:11:00.534623 | orchestrator | 2025-08-29 18:11:00.534630 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:11:00.534638 | orchestrator | Friday 29 August 2025 18:10:57 +0000 (0:00:33.935) 0:04:42.865 ********* 2025-08-29 18:11:00.534650 | orchestrator | =============================================================================== 2025-08-29 18:11:00.534658 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.47s 2025-08-29 18:11:00.534666 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 33.94s 2025-08-29 18:11:00.534674 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.89s 2025-08-29 18:11:00.534681 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.24s 2025-08-29 18:11:00.534693 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.55s 2025-08-29 18:11:00.534701 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.55s 2025-08-29 18:11:00.534709 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.42s 2025-08-29 18:11:00.534717 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.17s 2025-08-29 18:11:00.534724 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 
4.96s 2025-08-29 18:11:00.534732 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.63s 2025-08-29 18:11:00.534740 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.62s 2025-08-29 18:11:00.534748 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 4.53s 2025-08-29 18:11:00.534755 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.46s 2025-08-29 18:11:00.534768 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.43s 2025-08-29 18:11:00.534775 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.37s 2025-08-29 18:11:00.534783 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.98s 2025-08-29 18:11:00.534791 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.98s 2025-08-29 18:11:00.534799 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.88s 2025-08-29 18:11:00.534806 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.88s 2025-08-29 18:11:00.534814 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.86s 2025-08-29 18:11:00.534822 | orchestrator | 2025-08-29 18:11:00 | INFO  | Task 30cd6d6b-7a27-4c39-bef1-2e39cc888ac5 is in state SUCCESS 2025-08-29 18:11:00.534829 | orchestrator | 2025-08-29 18:11:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:11:03.587857 | orchestrator | 2025-08-29 18:11:03 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:11:03.588736 | orchestrator | 2025-08-29 18:11:03 | INFO  | Task 89f0809a-1fd0-4b2f-b93b-b2e73579aa1a is in state STARTED 2025-08-29 18:11:03.590882 | orchestrator | 2025-08-29 18:11:03 | INFO  | Task 
84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:11:03.592576 | orchestrator | 2025-08-29 18:11:03 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED 2025-08-29 18:11:03.592586 | orchestrator | 2025-08-29 18:11:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:11:06.628180 | orchestrator | 2025-08-29 18:11:06 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:11:06.630369 | orchestrator | 2025-08-29 18:11:06 | INFO  | Task 89f0809a-1fd0-4b2f-b93b-b2e73579aa1a is in state SUCCESS 2025-08-29 18:11:06.634003 | orchestrator | 2025-08-29 18:11:06 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:11:06.635520 | orchestrator | 2025-08-29 18:11:06 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:11:06.637929 | orchestrator | 2025-08-29 18:11:06 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED 2025-08-29 18:11:06.638327 | orchestrator | 2025-08-29 18:11:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:11:09.677297 | orchestrator | 2025-08-29 18:11:09 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:11:09.677741 | orchestrator | 2025-08-29 18:11:09 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:11:09.678612 | orchestrator | 2025-08-29 18:11:09 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:11:09.679532 | orchestrator | 2025-08-29 18:11:09 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED 2025-08-29 18:11:09.679557 | orchestrator | 2025-08-29 18:11:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:11:12.731387 | orchestrator | 2025-08-29 18:11:12 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:11:12.731940 | orchestrator | 2025-08-29 18:11:12 | INFO  | Task 
84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:12:44.232162 | orchestrator | 2025-08-29 18:12:44 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:12:44.234263 | orchestrator | 2025-08-29 18:12:44 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED 2025-08-29 18:12:44.234673 | orchestrator | 2025-08-29 18:12:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:12:47.277804 | orchestrator | 2025-08-29 18:12:47 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state STARTED 2025-08-29 18:12:47.279589 | orchestrator | 2025-08-29 18:12:47 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:12:47.281715 | orchestrator | 2025-08-29 18:12:47 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:12:47.283841 | orchestrator | 2025-08-29 18:12:47 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED 2025-08-29 18:12:47.283917 | orchestrator | 2025-08-29 18:12:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:12:50.336376 | orchestrator | 2025-08-29 18:12:50 | INFO  | Task cf204358-b04d-422d-a5f4-265d1c8b27a2 is in state SUCCESS 2025-08-29 18:12:50.338963 | orchestrator | 2025-08-29 18:12:50.339018 | orchestrator | 2025-08-29 18:12:50.339032 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:12:50.339084 | orchestrator | 2025-08-29 18:12:50.339096 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:12:50.339107 | orchestrator | Friday 29 August 2025 18:11:02 +0000 (0:00:00.203) 0:00:00.203 ********* 2025-08-29 18:12:50.339118 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:12:50.339146 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:12:50.339157 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:12:50.339168 | orchestrator | 2025-08-29 18:12:50.339179 | 
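The wait loop above — a fixed set of task UUIDs polled every few seconds until each reports a terminal state — can be sketched as a small poller. This is a hedged illustration of the pattern visible in the log, not the actual OSISM client code; `get_state` is a hypothetical stand-in for whatever task-state lookup the CLI performs.

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll task states until every task reaches a terminal state.

    get_state maps a task id to a state string, mirroring the
    'Task <uuid> is in state STARTED' lines in the console output.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        if not still_running:
            return
        print(f"Wait {int(interval)} second(s) until the next check")
        sleep(interval)
        pending = still_running
```

Injecting `sleep` keeps the loop testable without real delays; the production behavior would simply use the default `time.sleep`.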
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:12:50.339190 | orchestrator | Friday 29 August 2025 18:11:02 +0000 (0:00:00.350) 0:00:00.553 *********
2025-08-29 18:12:50.339201 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-08-29 18:12:50.339212 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-08-29 18:12:50.339223 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-08-29 18:12:50.339234 | orchestrator |
2025-08-29 18:12:50.339245 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-08-29 18:12:50.339255 | orchestrator |
2025-08-29 18:12:50.339266 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-08-29 18:12:50.339277 | orchestrator | Friday 29 August 2025 18:11:03 +0000 (0:00:00.665) 0:00:01.219 *********
2025-08-29 18:12:50.339288 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.339299 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:12:50.339310 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:12:50.339320 | orchestrator |
2025-08-29 18:12:50.339331 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:12:50.339343 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:12:50.339355 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:12:50.339391 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:12:50.339402 | orchestrator |
2025-08-29 18:12:50.339413 | orchestrator |
2025-08-29 18:12:50.339423 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:12:50.339434 | orchestrator | Friday 29 August 2025 18:11:03 +0000 (0:00:00.686) 0:00:01.905 *********
2025-08-29 18:12:50.339445 | orchestrator | ===============================================================================
2025-08-29 18:12:50.339455 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.69s
2025-08-29 18:12:50.339466 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2025-08-29 18:12:50.339477 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-08-29 18:12:50.339487 | orchestrator |
2025-08-29 18:12:50.339498 | orchestrator |
2025-08-29 18:12:50.339509 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:12:50.339521 | orchestrator |
2025-08-29 18:12:50.339533 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-08-29 18:12:50.339545 | orchestrator | Friday 29 August 2025 18:03:37 +0000 (0:00:00.353) 0:00:00.353 *********
2025-08-29 18:12:50.339598 | orchestrator | changed: [testbed-manager]
2025-08-29 18:12:50.339613 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.339626 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:12:50.339638 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:12:50.339651 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.339663 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.339675 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.339687 | orchestrator |
2025-08-29 18:12:50.339699 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:12:50.339711 | orchestrator | Friday 29 August 2025 18:03:38 +0000 (0:00:01.041) 0:00:01.395 *********
2025-08-29 18:12:50.339724 | orchestrator | changed: [testbed-manager]
2025-08-29 18:12:50.339736 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.339749 | orchestrator | 
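The TASKS RECAP above is produced by the `profile_tasks` callback: each line is a task name, a dash rule, and the task duration in seconds. A small parser for that format, useful when mining job logs for slow tasks (the regex is an assumption fitted to the lines shown here, not part of Ansible itself):

```python
import re

# Matches profile_tasks recap lines such as
# "Waiting for Nova public port to be UP ----------------------------------- 0.69s"
RECAP_LINE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+(?:\.\d+)?)s$")

def parse_recap(lines):
    """Return (task name, duration in seconds) pairs from TASKS RECAP output."""
    results = []
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            results.append((m.group("task"), float(m.group("secs"))))
    return results
```

Sorting the result by the second element gives the slowest tasks first, which is the recap's own ordering.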
changed: [testbed-node-1]
2025-08-29 18:12:50.339761 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:12:50.339773 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.339785 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.339798 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.339810 | orchestrator |
2025-08-29 18:12:50.339822 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:12:50.339834 | orchestrator | Friday 29 August 2025 18:03:38 +0000 (0:00:00.788) 0:00:02.183 *********
2025-08-29 18:12:50.339847 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-08-29 18:12:50.339859 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-08-29 18:12:50.339871 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-08-29 18:12:50.339881 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-08-29 18:12:50.339892 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-08-29 18:12:50.339903 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-08-29 18:12:50.339913 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-08-29 18:12:50.339924 | orchestrator |
2025-08-29 18:12:50.339935 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-08-29 18:12:50.339946 | orchestrator |
2025-08-29 18:12:50.339957 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-08-29 18:12:50.339968 | orchestrator | Friday 29 August 2025 18:03:39 +0000 (0:00:00.998) 0:00:03.182 *********
2025-08-29 18:12:50.339978 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:12:50.339989 | orchestrator |
2025-08-29 18:12:50.340000 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-08-29 18:12:50.340023 | orchestrator | Friday 29 August 2025 18:03:40 +0000 (0:00:00.903) 0:00:04.085 *********
2025-08-29 18:12:50.340034 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-08-29 18:12:50.340060 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-08-29 18:12:50.340072 | orchestrator |
2025-08-29 18:12:50.340083 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-08-29 18:12:50.340093 | orchestrator | Friday 29 August 2025 18:03:44 +0000 (0:00:04.034) 0:00:08.119 *********
2025-08-29 18:12:50.340104 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 18:12:50.340116 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 18:12:50.340127 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340137 | orchestrator |
2025-08-29 18:12:50.340148 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-08-29 18:12:50.340159 | orchestrator | Friday 29 August 2025 18:03:49 +0000 (0:00:04.252) 0:00:12.371 *********
2025-08-29 18:12:50.340170 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340181 | orchestrator |
2025-08-29 18:12:50.340192 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-08-29 18:12:50.340203 | orchestrator | Friday 29 August 2025 18:03:50 +0000 (0:00:00.887) 0:00:13.259 *********
2025-08-29 18:12:50.340214 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340224 | orchestrator |
2025-08-29 18:12:50.340235 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-08-29 18:12:50.340246 | orchestrator | Friday 29 August 2025 18:03:51 +0000 (0:00:01.404) 0:00:14.664 *********
2025-08-29 18:12:50.340256 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340267 | orchestrator |
2025-08-29 18:12:50.340278 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 18:12:50.340289 | orchestrator | Friday 29 August 2025 18:03:55 +0000 (0:00:03.657) 0:00:18.321 *********
2025-08-29 18:12:50.340300 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.340311 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.340321 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.340332 | orchestrator |
2025-08-29 18:12:50.340343 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-08-29 18:12:50.340354 | orchestrator | Friday 29 August 2025 18:03:55 +0000 (0:00:00.348) 0:00:18.670 *********
2025-08-29 18:12:50.340364 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.340375 | orchestrator |
2025-08-29 18:12:50.340386 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-08-29 18:12:50.340397 | orchestrator | Friday 29 August 2025 18:04:21 +0000 (0:00:26.399) 0:00:45.069 *********
2025-08-29 18:12:50.340408 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340418 | orchestrator |
2025-08-29 18:12:50.340429 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 18:12:50.340440 | orchestrator | Friday 29 August 2025 18:04:35 +0000 (0:00:13.480) 0:00:58.549 *********
2025-08-29 18:12:50.340451 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.340462 | orchestrator |
2025-08-29 18:12:50.340472 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 18:12:50.340483 | orchestrator | Friday 29 August 2025 18:04:47 +0000 (0:00:11.941) 0:01:10.491 *********
2025-08-29 18:12:50.340494 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.340505 | orchestrator |
2025-08-29 18:12:50.340516 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-08-29 18:12:50.340527 | orchestrator | Friday 29 August 2025 18:04:50 +0000 (0:00:02.716) 0:01:13.208 *********
2025-08-29 18:12:50.340538 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.340548 | orchestrator |
2025-08-29 18:12:50.340581 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 18:12:50.340592 | orchestrator | Friday 29 August 2025 18:04:50 +0000 (0:00:00.734) 0:01:13.942 *********
2025-08-29 18:12:50.340603 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:12:50.340621 | orchestrator |
2025-08-29 18:12:50.340632 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-08-29 18:12:50.340642 | orchestrator | Friday 29 August 2025 18:04:51 +0000 (0:00:00.715) 0:01:14.658 *********
2025-08-29 18:12:50.340653 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.340663 | orchestrator |
2025-08-29 18:12:50.340673 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-08-29 18:12:50.340684 | orchestrator | Friday 29 August 2025 18:05:06 +0000 (0:00:15.266) 0:01:29.925 *********
2025-08-29 18:12:50.340694 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.340705 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.340716 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.340726 | orchestrator |
2025-08-29 18:12:50.340737 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-08-29 18:12:50.340747 | orchestrator |
2025-08-29 18:12:50.340758 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-08-29 18:12:50.340768 | orchestrator | Friday 29 August 2025 18:05:07 +0000 (0:00:00.286) 0:01:30.211 *********
2025-08-29 
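The "Running Nova API bootstrap container" and "Create cell0 mappings" steps run `nova-manage` inside a one-shot container to sync the API database and register the special cell0. The exact kolla entrypoint is not shown in this log, so the command list below is a hedged sketch built from the standard `nova-manage` subcommands; the database URL argument is a hypothetical example.

```python
def api_bootstrap_commands(cell0_db_url):
    """Commands a Nova API bootstrap typically runs (sketch only).

    cell0_db_url is the database connection used for the cell0 mapping,
    e.g. 'mysql+pymysql://nova:secret@db/nova_cell0' (hypothetical).
    """
    return [
        # Sync the nova_api schema first; cell0 mapping needs the API DB.
        ["nova-manage", "api_db", "sync"],
        ["nova-manage", "cell_v2", "map_cell0",
         "--database_connection", cell0_db_url],
        # Finally sync the main cell database schema.
        ["nova-manage", "db", "sync"],
    ]
```

The 26-second duration of the bootstrap task above is consistent with these schema migrations being the dominant cost.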
18:12:50.340779 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:12:50.340789 | orchestrator |
2025-08-29 18:12:50.340800 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-08-29 18:12:50.340810 | orchestrator | Friday 29 August 2025 18:05:07 +0000 (0:00:00.515) 0:01:30.726 *********
2025-08-29 18:12:50.340821 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.340831 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.340842 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340852 | orchestrator |
2025-08-29 18:12:50.340862 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-08-29 18:12:50.340873 | orchestrator | Friday 29 August 2025 18:05:09 +0000 (0:00:01.949) 0:01:32.676 *********
2025-08-29 18:12:50.340883 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.340894 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.340904 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.340915 | orchestrator |
2025-08-29 18:12:50.340925 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-08-29 18:12:50.340936 | orchestrator | Friday 29 August 2025 18:05:11 +0000 (0:00:02.079) 0:01:34.755 *********
2025-08-29 18:12:50.340946 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.340957 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.340973 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.340984 | orchestrator |
2025-08-29 18:12:50.340995 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-08-29 18:12:50.341020 | orchestrator | Friday 29 August 2025 18:05:11 +0000 (0:00:00.357) 0:01:35.113 *********
2025-08-29 18:12:50.341030 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 18:12:50.341041 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341051 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 18:12:50.341062 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341072 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-08-29 18:12:50.341083 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-08-29 18:12:50.341094 | orchestrator |
2025-08-29 18:12:50.341104 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-08-29 18:12:50.341115 | orchestrator | Friday 29 August 2025 18:05:20 +0000 (0:00:08.232) 0:01:43.345 *********
2025-08-29 18:12:50.341125 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.341136 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341146 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341157 | orchestrator |
2025-08-29 18:12:50.341167 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-08-29 18:12:50.341178 | orchestrator | Friday 29 August 2025 18:05:20 +0000 (0:00:00.556) 0:01:43.901 *********
2025-08-29 18:12:50.341195 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-08-29 18:12:50.341206 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.341217 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-08-29 18:12:50.341227 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341237 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-08-29 18:12:50.341248 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341258 | orchestrator |
2025-08-29 18:12:50.341269 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-08-29 18:12:50.341279 | orchestrator | Friday 29 August 2025 18:05:22 +0000 (0:00:01.625) 0:01:45.527 *********
2025-08-29 18:12:50.341290 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341300 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.341311 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341321 | orchestrator |
2025-08-29 18:12:50.341332 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-08-29 18:12:50.341342 | orchestrator | Friday 29 August 2025 18:05:23 +0000 (0:00:00.750) 0:01:46.278 *********
2025-08-29 18:12:50.341353 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341363 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341374 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.341384 | orchestrator |
2025-08-29 18:12:50.341395 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-08-29 18:12:50.341405 | orchestrator | Friday 29 August 2025 18:05:24 +0000 (0:00:00.999) 0:01:47.277 *********
2025-08-29 18:12:50.341416 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341426 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341437 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.341447 | orchestrator |
2025-08-29 18:12:50.341457 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-08-29 18:12:50.341468 | orchestrator | Friday 29 August 2025 18:05:26 +0000 (0:00:02.501) 0:01:49.779 *********
2025-08-29 18:12:50.341479 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341489 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341505 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.341515 | orchestrator |
2025-08-29 18:12:50.341526 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 18:12:50.341537 | orchestrator | Friday 29 August 2025 18:05:47 +0000 (0:00:20.524) 0:02:10.304 *********
2025-08-29 18:12:50.341548 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341611 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341623 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.341634 | orchestrator |
2025-08-29 18:12:50.341644 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 18:12:50.341655 | orchestrator | Friday 29 August 2025 18:05:58 +0000 (0:00:11.361) 0:02:21.666 *********
2025-08-29 18:12:50.341666 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:50.341676 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341687 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341698 | orchestrator |
2025-08-29 18:12:50.341709 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-08-29 18:12:50.341719 | orchestrator | Friday 29 August 2025 18:05:59 +0000 (0:00:00.924) 0:02:22.590 *********
2025-08-29 18:12:50.341730 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341740 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341749 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.341758 | orchestrator |
2025-08-29 18:12:50.341768 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-08-29 18:12:50.341777 | orchestrator | Friday 29 August 2025 18:06:08 +0000 (0:00:09.439) 0:02:32.029 *********
2025-08-29 18:12:50.341787 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341796 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341805 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.341815 | orchestrator |
2025-08-29 18:12:50.341830 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-08-29 18:12:50.341840 | orchestrator | Friday 29 August 2025 18:06:10 +0000 (0:00:01.847) 0:02:33.877 *********
2025-08-29 18:12:50.341849 | orchestrator | skipping: [testbed-node-0] 
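The "Get a list of existing cells" / "Create cell" / "Update cell" sequence above is an idempotency check: the cell is created only if it does not exist yet, and updated only if its settings have drifted, which is why "Create cell" reports changed while "Update cell" is skipped here. A minimal sketch of that decision (the field names are illustrative, not kolla-ansible's actual variable names):

```python
def cell_action(existing_cells, name, transport_url, database_connection):
    """Decide whether a cell must be created, updated, or left alone."""
    for cell in existing_cells:
        if cell["name"] != name:
            continue
        if (cell["transport_url"] == transport_url
                and cell["database_connection"] == database_connection):
            return "skip"    # settings already match -> 'Update cell' skips
        return "update"      # cell exists but settings drifted
    return "create"          # no such cell yet -> 'Create cell' runs
```

On this first deployment the cell list contains no matching cell, so the outcome is "create".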
2025-08-29 18:12:50.341859 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.341868 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.341878 | orchestrator |
2025-08-29 18:12:50.341887 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-08-29 18:12:50.341896 | orchestrator |
2025-08-29 18:12:50.341906 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-08-29 18:12:50.341915 | orchestrator | Friday 29 August 2025 18:06:11 +0000 (0:00:00.474) 0:02:34.351 *********
2025-08-29 18:12:50.341924 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:12:50.341934 | orchestrator |
2025-08-29 18:12:50.341950 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-08-29 18:12:50.341960 | orchestrator | Friday 29 August 2025 18:06:12 +0000 (0:00:00.926) 0:02:35.278 *********
2025-08-29 18:12:50.341970 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-08-29 18:12:50.341979 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-08-29 18:12:50.341988 | orchestrator |
2025-08-29 18:12:50.341998 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-08-29 18:12:50.342007 | orchestrator | Friday 29 August 2025 18:06:15 +0000 (0:00:03.037) 0:02:38.316 *********
2025-08-29 18:12:50.342054 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-08-29 18:12:50.342067 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-08-29 18:12:50.342077 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-08-29 18:12:50.342087 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-08-29 18:12:50.342097 | orchestrator |
2025-08-29 18:12:50.342106 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-08-29 18:12:50.342115 | orchestrator | Friday 29 August 2025 18:06:21 +0000 (0:00:06.165) 0:02:44.481 *********
2025-08-29 18:12:50.342125 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 18:12:50.342134 | orchestrator |
2025-08-29 18:12:50.342144 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-08-29 18:12:50.342154 | orchestrator | Friday 29 August 2025 18:06:24 +0000 (0:00:03.255) 0:02:47.737 *********
2025-08-29 18:12:50.342163 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 18:12:50.342172 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-08-29 18:12:50.342182 | orchestrator |
2025-08-29 18:12:50.342191 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-08-29 18:12:50.342201 | orchestrator | Friday 29 August 2025 18:06:28 +0000 (0:00:03.675) 0:02:51.413 *********
2025-08-29 18:12:50.342210 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 18:12:50.342219 | orchestrator |
2025-08-29 18:12:50.342229 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-08-29 18:12:50.342238 | orchestrator | Friday 29 August 2025 18:06:31 +0000 (0:00:03.181) 0:02:54.594 *********
2025-08-29 18:12:50.342248 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-08-29 18:12:50.342257 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-08-29 18:12:50.342267 | orchestrator |
2025-08-29 18:12:50.342276 | orchestrator | TASK [nova : Ensuring config directories exist] 
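The endpoint registration above creates one internal and one public endpoint per service, each built from the corresponding FQDN plus the service port and API path (8774 and /v2.1 for nova). That URL construction can be sketched as follows, using the FQDNs visible in this deployment; the helper name is illustrative, not part of kolla-ansible.

```python
def nova_endpoints(internal_fqdn, external_fqdn, port=8774, path="/v2.1"):
    """Build the internal/public endpoint URLs registered in Keystone.

    Mirrors the '(item=nova -> https://... -> internal/public)' loop
    items shown in the service-ks-register task output.
    """
    return {
        "internal": f"https://{internal_fqdn}:{port}{path}",
        "public": f"https://{external_fqdn}:{port}{path}",
    }
```

The skipped `nova_legacy` items use the same pattern with the /v2/%(tenant_id)s path; they are skipped because the legacy compute service is disabled.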
********************************
2025-08-29 18:12:50.342285 | orchestrator | Friday 29 August 2025 18:06:38 +0000 (0:00:07.069) 0:03:01.663 *********
2025-08-29 18:12:50.342304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 18:12:50.342335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 18:12:50.342348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
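Each container item above carries a `healthcheck` mapping with `interval`, `retries`, `start_period`, `test`, and `timeout` (the durations are in seconds). Rendered as container runtime flags it would look roughly like the sketch below; the exact flag mapping kolla uses internally is an assumption here, the dict shape is taken from the log.

```python
def healthcheck_flags(hc):
    """Turn a kolla-style healthcheck dict into docker CLI flags (sketch)."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        # hc['test'] is ['CMD-SHELL', '<command>']; drop the CMD-SHELL marker
        # and pass the remainder as the shell command to run.
        "--health-cmd", " ".join(hc["test"][1:]),
    ]
```

For nova-api the command is an HTTP probe (`healthcheck_curl`) against the node's own API port; for nova-scheduler it is a port check (`healthcheck_port`) instead.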
2025-08-29 18:12:50.342364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 18:12:50.342382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 18:12:50.342392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 18:12:50.342402 | orchestrator |
2025-08-29 18:12:50.342412 | orchestrator | TASK [nova : Check if policies 
shall be overwritten] *************************** 2025-08-29 18:12:50.342422 | orchestrator | Friday 29 August 2025 18:06:39 +0000 (0:00:01.320) 0:03:02.984 ********* 2025-08-29 18:12:50.342432 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.342441 | orchestrator | 2025-08-29 18:12:50.342451 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 18:12:50.342461 | orchestrator | Friday 29 August 2025 18:06:39 +0000 (0:00:00.144) 0:03:03.129 ********* 2025-08-29 18:12:50.342470 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.342480 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.342489 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.342499 | orchestrator | 2025-08-29 18:12:50.342509 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 18:12:50.342537 | orchestrator | Friday 29 August 2025 18:06:40 +0000 (0:00:00.566) 0:03:03.696 ********* 2025-08-29 18:12:50.342547 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 18:12:50.342601 | orchestrator | 2025-08-29 18:12:50.342613 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 18:12:50.342623 | orchestrator | Friday 29 August 2025 18:06:41 +0000 (0:00:00.741) 0:03:04.438 ********* 2025-08-29 18:12:50.342632 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.342642 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.342652 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.342661 | orchestrator | 2025-08-29 18:12:50.342672 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 18:12:50.342686 | orchestrator | Friday 29 August 2025 18:06:41 +0000 (0:00:00.358) 0:03:04.796 ********* 2025-08-29 18:12:50.342701 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:12:50.342716 | orchestrator | 2025-08-29 18:12:50.342733 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 18:12:50.342749 | orchestrator | Friday 29 August 2025 18:06:42 +0000 (0:00:00.588) 0:03:05.385 ********* 2025-08-29 18:12:50.342767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.342804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.342837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.342847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.342867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.342875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.342883 | orchestrator | 2025-08-29 18:12:50.342894 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 18:12:50.342902 | orchestrator | Friday 29 August 2025 18:06:44 +0000 (0:00:02.591) 0:03:07.976 ********* 2025-08-29 18:12:50.342911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 18:12:50.342925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.342955 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.342965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 18:12:50.342979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.342987 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.342999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 18:12:50.343008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343017 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.343024 | orchestrator | 2025-08-29 18:12:50.343032 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 18:12:50.343044 | orchestrator | Friday 29 August 2025 18:06:45 +0000 (0:00:00.659) 0:03:08.636 ********* 2025-08-29 18:12:50.343053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 
18:12:50.343067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343076 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.343088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 
18:12:50.343097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343105 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.343120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 
18:12:50.343134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343142 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.343150 | orchestrator | 2025-08-29 18:12:50.343158 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-08-29 18:12:50.343165 | orchestrator | Friday 29 August 2025 18:06:46 +0000 (0:00:00.869) 0:03:09.505 ********* 2025-08-29 18:12:50.343177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343258 | orchestrator | 2025-08-29 18:12:50.343266 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 18:12:50.343274 | orchestrator | Friday 29 August 2025 18:06:49 +0000 (0:00:02.876) 0:03:12.382 ********* 2025-08-29 18:12:50.343288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343357 | orchestrator | 2025-08-29 18:12:50.343502 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 18:12:50.343512 | orchestrator | Friday 29 August 2025 18:06:58 +0000 (0:00:09.237) 0:03:21.619 ********* 2025-08-29 18:12:50.343521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 18:12:50.343533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343542 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.343550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 18:12:50.343604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343619 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.343627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 18:12:50.343643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.343651 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.343659 | orchestrator | 2025-08-29 18:12:50.343667 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-08-29 18:12:50.343675 | orchestrator | Friday 29 August 2025 18:06:59 +0000 (0:00:01.553) 0:03:23.173 ********* 2025-08-29 18:12:50.343683 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:50.343691 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:12:50.343698 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:12:50.343706 | orchestrator | 2025-08-29 18:12:50.343714 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-08-29 18:12:50.343721 | orchestrator | Friday 29 August 2025 18:07:02 +0000 (0:00:02.494) 0:03:25.668 ********* 2025-08-29 18:12:50.343729 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.343737 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
18:12:50.343745 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.343752 | orchestrator | 2025-08-29 18:12:50.343760 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-08-29 18:12:50.343768 | orchestrator | Friday 29 August 2025 18:07:03 +0000 (0:00:00.595) 0:03:26.264 ********* 2025-08-29 18:12:50.343788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 18:12:50.343819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.343855 | orchestrator | 2025-08-29 18:12:50.343863 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 18:12:50.343883 | orchestrator | Friday 29 August 2025 18:07:05 +0000 (0:00:01.989) 0:03:28.254 ********* 2025-08-29 18:12:50.343891 | orchestrator | 2025-08-29 18:12:50.343899 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 18:12:50.343907 | orchestrator | Friday 29 August 2025 18:07:05 +0000 (0:00:00.384) 0:03:28.638 ********* 2025-08-29 18:12:50.343914 | orchestrator | 2025-08-29 18:12:50.343922 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-08-29 18:12:50.343930 | orchestrator | Friday 29 August 2025 18:07:05 +0000 (0:00:00.294) 0:03:28.932 ********* 2025-08-29 18:12:50.343937 | orchestrator | 2025-08-29 18:12:50.343945 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-08-29 18:12:50.343953 | orchestrator | Friday 29 August 2025 18:07:06 +0000 (0:00:00.390) 0:03:29.323 ********* 2025-08-29 18:12:50.343960 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:50.343968 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:12:50.343976 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:12:50.343983 | orchestrator | 2025-08-29 18:12:50.343991 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-08-29 18:12:50.343999 | orchestrator | Friday 29 August 2025 18:07:26 +0000 (0:00:20.576) 0:03:49.900 ********* 2025-08-29 18:12:50.344006 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:50.344014 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:12:50.344022 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:12:50.344038 | 
orchestrator | 2025-08-29 18:12:50.344046 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-08-29 18:12:50.344053 | orchestrator | 2025-08-29 18:12:50.344061 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 18:12:50.344069 | orchestrator | Friday 29 August 2025 18:07:37 +0000 (0:00:10.405) 0:04:00.305 ********* 2025-08-29 18:12:50.344076 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:12:50.344084 | orchestrator | 2025-08-29 18:12:50.344092 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 18:12:50.344100 | orchestrator | Friday 29 August 2025 18:07:38 +0000 (0:00:01.876) 0:04:02.181 ********* 2025-08-29 18:12:50.344113 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.344121 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.344129 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.344136 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.344148 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.344156 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.344163 | orchestrator | 2025-08-29 18:12:50.344171 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-08-29 18:12:50.344179 | orchestrator | Friday 29 August 2025 18:07:41 +0000 (0:00:02.392) 0:04:04.574 ********* 2025-08-29 18:12:50.344187 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.344194 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.344202 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.344210 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:12:50.344217 | orchestrator | 
2025-08-29 18:12:50.344225 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 18:12:50.344233 | orchestrator | Friday 29 August 2025 18:07:42 +0000 (0:00:01.052) 0:04:05.626 ********* 2025-08-29 18:12:50.344241 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-08-29 18:12:50.344249 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-08-29 18:12:50.344256 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-08-29 18:12:50.344264 | orchestrator | 2025-08-29 18:12:50.344272 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 18:12:50.344280 | orchestrator | Friday 29 August 2025 18:07:44 +0000 (0:00:01.967) 0:04:07.594 ********* 2025-08-29 18:12:50.344287 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-08-29 18:12:50.344295 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-08-29 18:12:50.344303 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-08-29 18:12:50.344310 | orchestrator | 2025-08-29 18:12:50.344318 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 18:12:50.344326 | orchestrator | Friday 29 August 2025 18:07:46 +0000 (0:00:01.812) 0:04:09.406 ********* 2025-08-29 18:12:50.344334 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-08-29 18:12:50.344341 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.344349 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-08-29 18:12:50.344357 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.344365 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-08-29 18:12:50.344372 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.344380 | orchestrator | 2025-08-29 18:12:50.344388 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl 
variables] ********************** 2025-08-29 18:12:50.344395 | orchestrator | Friday 29 August 2025 18:07:46 +0000 (0:00:00.597) 0:04:10.004 ********* 2025-08-29 18:12:50.344403 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 18:12:50.344416 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 18:12:50.344424 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 18:12:50.344431 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 18:12:50.344439 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.344447 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 18:12:50.344454 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 18:12:50.344462 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 18:12:50.344470 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 18:12:50.344478 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.344486 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-08-29 18:12:50.344498 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 18:12:50.344506 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 18:12:50.344514 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.344522 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-08-29 18:12:50.344529 | orchestrator | 2025-08-29 18:12:50.344537 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-08-29 18:12:50.344545 | orchestrator | Friday 29 August 2025 
18:07:48 +0000 (0:00:01.508) 0:04:11.512 ********* 2025-08-29 18:12:50.344553 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.344584 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:12:50.344592 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.344600 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:12:50.344607 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.344615 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:12:50.344623 | orchestrator | 2025-08-29 18:12:50.344630 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 18:12:50.344638 | orchestrator | Friday 29 August 2025 18:07:50 +0000 (0:00:01.718) 0:04:13.231 ********* 2025-08-29 18:12:50.344646 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.344653 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.344661 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.344669 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:12:50.344676 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:12:50.344684 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:12:50.344692 | orchestrator | 2025-08-29 18:12:50.344699 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 18:12:50.344707 | orchestrator | Friday 29 August 2025 18:07:52 +0000 (0:00:02.860) 0:04:16.091 ********* 2025-08-29 18:12:50.344720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344759 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344880 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344889 | orchestrator | 2025-08-29 18:12:50.344897 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 18:12:50.344905 | orchestrator | Friday 29 August 2025 18:07:57 +0000 (0:00:04.148) 0:04:20.240 ********* 2025-08-29 18:12:50.344913 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:12:50.344922 | orchestrator | 2025-08-29 18:12:50.344929 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 18:12:50.344937 | orchestrator | Friday 29 August 2025 18:07:59 +0000 (0:00:02.202) 0:04:22.442 ********* 2025-08-29 18:12:50.344945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.344992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.345109 | orchestrator | 2025-08-29 18:12:50.345117 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 18:12:50.345125 | orchestrator | Friday 29 August 2025 18:08:05 +0000 (0:00:05.883) 0:04:28.326 ********* 2025-08-29 18:12:50.345685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.345708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.345717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.345726 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.345734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.345749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.345792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.345802 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.345810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.345819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.345827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.345835 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.345847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.345861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.345869 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.345900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.345909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.345918 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.345926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.345934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.345942 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.345950 | orchestrator | 2025-08-29 18:12:50.345959 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 18:12:50.345967 | orchestrator | Friday 29 August 2025 18:08:07 +0000 (0:00:02.745) 0:04:31.071 ********* 2025-08-29 18:12:50.345980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.345994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.346049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.346061 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.346069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.346078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.346090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.346104 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.346112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.346120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.346128 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.346155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  
2025-08-29 18:12:50.346163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.346170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.346182 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.346193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.346200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.346206 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.346230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.346238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.346245 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.346252 | orchestrator | 2025-08-29 18:12:50.346258 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 18:12:50.346265 | orchestrator | Friday 29 August 2025 18:08:11 +0000 (0:00:03.395) 0:04:34.467 ********* 2025-08-29 18:12:50.346272 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.346279 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.346287 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.346294 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 18:12:50.346303 | orchestrator | 2025-08-29 18:12:50.346310 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-08-29 18:12:50.346318 | orchestrator | Friday 29 August 2025 18:08:12 +0000 (0:00:01.703) 0:04:36.170 ********* 2025-08-29 18:12:50.346325 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 18:12:50.346333 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 18:12:50.346340 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 18:12:50.346348 | orchestrator | 2025-08-29 18:12:50.346355 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-08-29 18:12:50.346362 | orchestrator | Friday 29 August 2025 18:08:14 +0000 (0:00:01.742) 0:04:37.912 ********* 2025-08-29 18:12:50.346375 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 18:12:50.346383 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 18:12:50.346390 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 18:12:50.346397 | orchestrator | 
2025-08-29 18:12:50.346405 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-08-29 18:12:50.346412 | orchestrator | Friday 29 August 2025 18:08:16 +0000 (0:00:01.764) 0:04:39.677 ********* 2025-08-29 18:12:50.346420 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:12:50.346427 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:12:50.346435 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:12:50.346442 | orchestrator | 2025-08-29 18:12:50.346450 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-08-29 18:12:50.346457 | orchestrator | Friday 29 August 2025 18:08:17 +0000 (0:00:00.804) 0:04:40.482 ********* 2025-08-29 18:12:50.346465 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:12:50.346472 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:12:50.346480 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:12:50.346487 | orchestrator | 2025-08-29 18:12:50.346494 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-08-29 18:12:50.346501 | orchestrator | Friday 29 August 2025 18:08:17 +0000 (0:00:00.512) 0:04:40.995 ********* 2025-08-29 18:12:50.346508 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 18:12:50.346516 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 18:12:50.346523 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 18:12:50.346531 | orchestrator | 2025-08-29 18:12:50.346538 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-08-29 18:12:50.346546 | orchestrator | Friday 29 August 2025 18:08:19 +0000 (0:00:01.590) 0:04:42.585 ********* 2025-08-29 18:12:50.346636 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 18:12:50.346647 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 18:12:50.346655 | orchestrator | 
changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 18:12:50.346662 | orchestrator | 2025-08-29 18:12:50.346669 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-08-29 18:12:50.346676 | orchestrator | Friday 29 August 2025 18:08:21 +0000 (0:00:01.737) 0:04:44.323 ********* 2025-08-29 18:12:50.346683 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 18:12:50.346689 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 18:12:50.346696 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 18:12:50.346702 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-08-29 18:12:50.346709 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-08-29 18:12:50.346715 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-08-29 18:12:50.346722 | orchestrator | 2025-08-29 18:12:50.346728 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-08-29 18:12:50.346735 | orchestrator | Friday 29 August 2025 18:08:28 +0000 (0:00:07.005) 0:04:51.328 ********* 2025-08-29 18:12:50.346742 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.346748 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.346755 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.346761 | orchestrator | 2025-08-29 18:12:50.346768 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-08-29 18:12:50.346774 | orchestrator | Friday 29 August 2025 18:08:28 +0000 (0:00:00.564) 0:04:51.893 ********* 2025-08-29 18:12:50.346781 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.346787 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.346794 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.346800 | orchestrator | 2025-08-29 18:12:50.346807 | orchestrator | TASK [nova-cell : 
Ensuring libvirt secrets directory exists] ******************* 2025-08-29 18:12:50.346836 | orchestrator | Friday 29 August 2025 18:08:29 +0000 (0:00:00.474) 0:04:52.367 ********* 2025-08-29 18:12:50.346844 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:12:50.346855 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:12:50.346862 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:12:50.346868 | orchestrator | 2025-08-29 18:12:50.346875 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-08-29 18:12:50.346882 | orchestrator | Friday 29 August 2025 18:08:31 +0000 (0:00:02.451) 0:04:54.818 ********* 2025-08-29 18:12:50.346889 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 18:12:50.346896 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 18:12:50.346902 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 18:12:50.346909 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 18:12:50.346916 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 18:12:50.346922 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 18:12:50.346929 | orchestrator | 2025-08-29 18:12:50.346935 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-08-29 18:12:50.346942 | orchestrator | Friday 29 August 2025 18:08:38 +0000 
(0:00:06.500) 0:05:01.319 ********* 2025-08-29 18:12:50.346949 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 18:12:50.346955 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 18:12:50.346962 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 18:12:50.346968 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 18:12:50.346975 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:12:50.346981 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 18:12:50.346988 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:12:50.346994 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 18:12:50.347001 | orchestrator | changed: [testbed-node-5] 2025-08-29 18:12:50.347008 | orchestrator | 2025-08-29 18:12:50.347014 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-08-29 18:12:50.347021 | orchestrator | Friday 29 August 2025 18:08:43 +0000 (0:00:05.285) 0:05:06.605 ********* 2025-08-29 18:12:50.347027 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.347034 | orchestrator | 2025-08-29 18:12:50.347041 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-08-29 18:12:50.347047 | orchestrator | Friday 29 August 2025 18:08:43 +0000 (0:00:00.177) 0:05:06.783 ********* 2025-08-29 18:12:50.347054 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.347060 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.347067 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.347073 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.347080 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.347086 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.347093 | orchestrator | 2025-08-29 18:12:50.347099 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 
2025-08-29 18:12:50.347106 | orchestrator | Friday 29 August 2025 18:08:45 +0000 (0:00:01.483) 0:05:08.267 ********* 2025-08-29 18:12:50.347112 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 18:12:50.347119 | orchestrator | 2025-08-29 18:12:50.347125 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-08-29 18:12:50.347135 | orchestrator | Friday 29 August 2025 18:08:45 +0000 (0:00:00.709) 0:05:08.977 ********* 2025-08-29 18:12:50.347142 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.347149 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.347161 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.347167 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.347174 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.347180 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.347187 | orchestrator | 2025-08-29 18:12:50.347193 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-08-29 18:12:50.347200 | orchestrator | Friday 29 August 2025 18:08:47 +0000 (0:00:01.268) 0:05:10.245 ********* 2025-08-29 18:12:50.347207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2025-08-29 18:12:50.347304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347359 | orchestrator | 2025-08-29 18:12:50.347366 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 18:12:50.347373 | orchestrator | Friday 29 August 2025 18:08:53 +0000 (0:00:06.083) 0:05:16.329 ********* 2025-08-29 18:12:50.347380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.347394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.347402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.347413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.347420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.347427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.347434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347448 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.347516 | orchestrator | 2025-08-29 18:12:50.347522 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 18:12:50.347529 | orchestrator | Friday 29 August 2025 18:09:02 +0000 (0:00:09.769) 0:05:26.099 ********* 2025-08-29 18:12:50.347536 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.347542 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.347552 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.347579 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.347590 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.347600 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.347606 | orchestrator | 2025-08-29 18:12:50.347613 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 18:12:50.347619 | orchestrator | Friday 29 August 2025 18:09:05 +0000 (0:00:02.465) 0:05:28.565 ********* 2025-08-29 18:12:50.347626 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 18:12:50.347632 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 18:12:50.347639 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 18:12:50.347645 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 18:12:50.347652 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 18:12:50.347658 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 18:12:50.347665 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 18:12:50.347671 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.347678 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 18:12:50.347689 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.347696 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 
18:12:50.347702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.347709 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 18:12:50.347716 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 18:12:50.347722 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 18:12:50.347729 | orchestrator | 2025-08-29 18:12:50.347735 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 18:12:50.347742 | orchestrator | Friday 29 August 2025 18:09:10 +0000 (0:00:05.229) 0:05:33.795 ********* 2025-08-29 18:12:50.347748 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.347754 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.347761 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.347767 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.347774 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.347780 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.347787 | orchestrator | 2025-08-29 18:12:50.347793 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 18:12:50.347800 | orchestrator | Friday 29 August 2025 18:09:11 +0000 (0:00:00.697) 0:05:34.492 ********* 2025-08-29 18:12:50.347806 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 18:12:50.347813 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 18:12:50.347820 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 18:12:50.347826 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 18:12:50.347838 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 18:12:50.347845 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 18:12:50.347852 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 18:12:50.347858 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 18:12:50.347865 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 18:12:50.347871 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.347878 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 18:12:50.347884 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 18:12:50.347891 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.347897 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 18:12:50.347904 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.347911 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 18:12:50.347917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 18:12:50.347923 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 
18:12:50.347934 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 18:12:50.347945 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 18:12:50.347951 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 18:12:50.347958 | orchestrator | 2025-08-29 18:12:50.347964 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 18:12:50.347971 | orchestrator | Friday 29 August 2025 18:09:17 +0000 (0:00:06.133) 0:05:40.625 ********* 2025-08-29 18:12:50.347977 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 18:12:50.347984 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 18:12:50.347990 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 18:12:50.347997 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 18:12:50.348003 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 18:12:50.348010 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 18:12:50.348016 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 18:12:50.348022 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 18:12:50.348029 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 18:12:50.348035 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  
2025-08-29 18:12:50.348042 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 18:12:50.348048 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 18:12:50.348055 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348061 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 18:12:50.348067 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 18:12:50.348074 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 18:12:50.348080 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348087 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 18:12:50.348093 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 18:12:50.348100 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348106 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 18:12:50.348113 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 18:12:50.348119 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 18:12:50.348126 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 18:12:50.348132 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 18:12:50.348138 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 18:12:50.348148 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 18:12:50.348155 | orchestrator | 
2025-08-29 18:12:50.348161 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 18:12:50.348167 | orchestrator | Friday 29 August 2025 18:09:25 +0000 (0:00:08.095) 0:05:48.721 ********* 2025-08-29 18:12:50.348174 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.348180 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.348191 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.348197 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348204 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348210 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348216 | orchestrator | 2025-08-29 18:12:50.348223 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 18:12:50.348229 | orchestrator | Friday 29 August 2025 18:09:26 +0000 (0:00:00.618) 0:05:49.339 ********* 2025-08-29 18:12:50.348235 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.348242 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.348248 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.348254 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348261 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348267 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348274 | orchestrator | 2025-08-29 18:12:50.348280 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 18:12:50.348287 | orchestrator | Friday 29 August 2025 18:09:27 +0000 (0:00:00.884) 0:05:50.224 ********* 2025-08-29 18:12:50.348293 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348300 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348306 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348313 | orchestrator | changed: [testbed-node-3] 2025-08-29 18:12:50.348319 | orchestrator | 
changed: [testbed-node-5] 2025-08-29 18:12:50.348325 | orchestrator | changed: [testbed-node-4] 2025-08-29 18:12:50.348331 | orchestrator | 2025-08-29 18:12:50.348338 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 18:12:50.348348 | orchestrator | Friday 29 August 2025 18:09:29 +0000 (0:00:02.185) 0:05:52.409 ********* 2025-08-29 18:12:50.348355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.348362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.348369 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.348385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.348392 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.348399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.348410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.348417 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.348424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 18:12:50.348430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 18:12:50.348447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.348454 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.348460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.348470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.348477 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.348491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.348498 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 18:12:50.348516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:50.348522 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348529 | orchestrator | 2025-08-29 18:12:50.348536 | 
orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-08-29 18:12:50.348546 | orchestrator | Friday 29 August 2025 18:09:31 +0000 (0:00:02.079) 0:05:54.489 ********* 2025-08-29 18:12:50.348552 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 18:12:50.348595 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 18:12:50.348601 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.348608 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 18:12:50.348614 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 18:12:50.348621 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.348627 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 18:12:50.348634 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 18:12:50.348640 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.348647 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 18:12:50.348653 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 18:12:50.348660 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348666 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 18:12:50.348672 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 18:12:50.348679 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348685 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 18:12:50.348692 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 18:12:50.348698 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348705 | orchestrator | 2025-08-29 18:12:50.348711 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 
2025-08-29 18:12:50.348718 | orchestrator | Friday 29 August 2025 18:09:32 +0000 (0:00:00.746) 0:05:55.236 ********* 2025-08-29 18:12:50.348729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348749 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:50.348863 | orchestrator | 2025-08-29 18:12:50.348870 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 18:12:50.348877 | orchestrator | Friday 29 August 2025 18:09:35 +0000 (0:00:03.201) 0:05:58.437 ********* 2025-08-29 18:12:50.348883 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.348890 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.348896 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.348903 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.348909 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.348916 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.348922 | orchestrator | 2025-08-29 18:12:50.348929 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-08-29 18:12:50.348935 | orchestrator | Friday 29 August 2025 18:09:35 +0000 (0:00:00.723) 0:05:59.160 ********* 2025-08-29 18:12:50.348942 | orchestrator | 2025-08-29 18:12:50.348954 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 18:12:50.348960 | orchestrator | Friday 29 August 2025 18:09:36 +0000 (0:00:00.154) 0:05:59.315 ********* 2025-08-29 18:12:50.348967 | orchestrator | 2025-08-29 18:12:50.348973 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 18:12:50.348980 | orchestrator | Friday 29 August 2025 18:09:36 +0000 (0:00:00.135) 0:05:59.450 ********* 2025-08-29 18:12:50.348986 | orchestrator | 2025-08-29 18:12:50.348992 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 18:12:50.348998 | orchestrator | Friday 29 August 2025 18:09:36 +0000 (0:00:00.369) 0:05:59.819 ********* 2025-08-29 18:12:50.349004 | orchestrator | 2025-08-29 18:12:50.349010 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 18:12:50.349016 | orchestrator | Friday 29 August 2025 18:09:36 +0000 (0:00:00.131) 0:05:59.951 ********* 2025-08-29 18:12:50.349022 | orchestrator | 2025-08-29 18:12:50.349028 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-08-29 18:12:50.349034 | orchestrator | Friday 29 August 2025 18:09:36 +0000 (0:00:00.183) 0:06:00.135 ********* 2025-08-29 18:12:50.349040 | orchestrator | 2025-08-29 18:12:50.349046 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-08-29 18:12:50.349052 | orchestrator | Friday 29 August 2025 18:09:37 +0000 (0:00:00.168) 0:06:00.303 ********* 2025-08-29 18:12:50.349058 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:50.349069 | orchestrator | 
changed: [testbed-node-1]
2025-08-29 18:12:50.349075 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:12:50.349081 | orchestrator |
2025-08-29 18:12:50.349087 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-08-29 18:12:50.349093 | orchestrator | Friday 29 August 2025 18:09:49 +0000 (0:00:12.794) 0:06:13.097 *********
2025-08-29 18:12:50.349099 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:12:50.349105 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:12:50.349111 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:12:50.349118 | orchestrator |
2025-08-29 18:12:50.349127 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-08-29 18:12:50.349133 | orchestrator | Friday 29 August 2025 18:10:09 +0000 (0:00:19.540) 0:06:32.638 *********
2025-08-29 18:12:50.349140 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.349146 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.349152 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.349158 | orchestrator |
2025-08-29 18:12:50.349164 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-08-29 18:12:50.349170 | orchestrator | Friday 29 August 2025 18:10:30 +0000 (0:00:21.382) 0:06:54.020 *********
2025-08-29 18:12:50.349176 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.349182 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.349188 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.349194 | orchestrator |
2025-08-29 18:12:50.349200 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-08-29 18:12:50.349206 | orchestrator | Friday 29 August 2025 18:11:07 +0000 (0:00:36.940) 0:07:30.961 *********
2025-08-29 18:12:50.349212 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.349218 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.349224 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2025-08-29 18:12:50.349230 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.349236 | orchestrator |
2025-08-29 18:12:50.349242 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-08-29 18:12:50.349248 | orchestrator | Friday 29 August 2025 18:11:14 +0000 (0:00:06.311) 0:07:37.272 *********
2025-08-29 18:12:50.349254 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.349260 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.349267 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.349273 | orchestrator |
2025-08-29 18:12:50.349279 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-08-29 18:12:50.349285 | orchestrator | Friday 29 August 2025 18:11:15 +0000 (0:00:00.974) 0:07:38.247 *********
2025-08-29 18:12:50.349291 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:12:50.349297 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:12:50.349303 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:12:50.349309 | orchestrator |
2025-08-29 18:12:50.349315 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-08-29 18:12:50.349321 | orchestrator | Friday 29 August 2025 18:11:40 +0000 (0:00:25.485) 0:08:03.732 *********
2025-08-29 18:12:50.349327 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:12:50.349333 | orchestrator |
2025-08-29 18:12:50.349339 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-08-29 18:12:50.349345 | orchestrator | Friday 29 August 2025 18:11:40 +0000 (0:00:00.199) 0:08:03.932 *********
2025-08-29 18:12:50.349351 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:12:50.349357 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:12:50.349363 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.349369 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.349375 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.349381 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-08-29 18:12:50.349387 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 18:12:50.349397 | orchestrator |
2025-08-29 18:12:50.349403 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-08-29 18:12:50.349409 | orchestrator | Friday 29 August 2025 18:12:04 +0000 (0:00:23.401) 0:08:27.333 *********
2025-08-29 18:12:50.349415 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:12:50.349421 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:12:50.349427 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:12:50.349433 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.349439 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.349445 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.349451 | orchestrator |
2025-08-29 18:12:50.349457 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-08-29 18:12:50.349467 | orchestrator | Friday 29 August 2025 18:12:12 +0000 (0:00:08.463) 0:08:35.796 *********
2025-08-29 18:12:50.349473 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.349479 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:12:50.349485 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:12:50.349491 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.349497 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.349503 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-08-29
18:12:50.349509 | orchestrator | 2025-08-29 18:12:50.349515 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 18:12:50.349521 | orchestrator | Friday 29 August 2025 18:12:16 +0000 (0:00:03.661) 0:08:39.458 ********* 2025-08-29 18:12:50.349527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 18:12:50.349533 | orchestrator | 2025-08-29 18:12:50.349539 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 18:12:50.349546 | orchestrator | Friday 29 August 2025 18:12:28 +0000 (0:00:11.857) 0:08:51.315 ********* 2025-08-29 18:12:50.349551 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 18:12:50.349573 | orchestrator | 2025-08-29 18:12:50.349579 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-08-29 18:12:50.349585 | orchestrator | Friday 29 August 2025 18:12:29 +0000 (0:00:01.293) 0:08:52.609 ********* 2025-08-29 18:12:50.349591 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.349598 | orchestrator | 2025-08-29 18:12:50.349603 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-08-29 18:12:50.349609 | orchestrator | Friday 29 August 2025 18:12:30 +0000 (0:00:01.403) 0:08:54.013 ********* 2025-08-29 18:12:50.349615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 18:12:50.349622 | orchestrator | 2025-08-29 18:12:50.349628 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-08-29 18:12:50.349637 | orchestrator | Friday 29 August 2025 18:12:40 +0000 (0:00:09.994) 0:09:04.007 ********* 2025-08-29 18:12:50.349643 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:12:50.349649 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:12:50.349655 | orchestrator | ok: [testbed-node-5] 2025-08-29 
18:12:50.349661 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:12:50.349667 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:12:50.349673 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:12:50.349679 | orchestrator | 2025-08-29 18:12:50.349685 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-08-29 18:12:50.349691 | orchestrator | 2025-08-29 18:12:50.349697 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-08-29 18:12:50.349704 | orchestrator | Friday 29 August 2025 18:12:42 +0000 (0:00:02.145) 0:09:06.153 ********* 2025-08-29 18:12:50.349710 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:50.349716 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:12:50.349722 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:12:50.349728 | orchestrator | 2025-08-29 18:12:50.349734 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-08-29 18:12:50.349744 | orchestrator | 2025-08-29 18:12:50.349750 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-08-29 18:12:50.349756 | orchestrator | Friday 29 August 2025 18:12:43 +0000 (0:00:01.011) 0:09:07.164 ********* 2025-08-29 18:12:50.349763 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.349769 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.349775 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.349781 | orchestrator | 2025-08-29 18:12:50.349787 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-08-29 18:12:50.349793 | orchestrator | 2025-08-29 18:12:50.349799 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-08-29 18:12:50.349805 | orchestrator | Friday 29 August 2025 18:12:44 +0000 (0:00:00.726) 0:09:07.890 ********* 2025-08-29 18:12:50.349811 
| orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-08-29 18:12:50.349817 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-08-29 18:12:50.349823 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-08-29 18:12:50.349829 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-08-29 18:12:50.349836 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-08-29 18:12:50.349842 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-08-29 18:12:50.349848 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:12:50.349854 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-08-29 18:12:50.349860 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-08-29 18:12:50.349866 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-08-29 18:12:50.349872 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-08-29 18:12:50.349878 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-08-29 18:12:50.349884 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-08-29 18:12:50.349891 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:12:50.349897 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-08-29 18:12:50.349903 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-08-29 18:12:50.349909 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-08-29 18:12:50.349915 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-08-29 18:12:50.349921 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-08-29 18:12:50.349927 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-08-29 18:12:50.349933 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:12:50.349939 | 
orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-08-29 18:12:50.349945 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-08-29 18:12:50.349951 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-08-29 18:12:50.349960 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-08-29 18:12:50.349967 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-08-29 18:12:50.349973 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-08-29 18:12:50.349979 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:50.349985 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-08-29 18:12:50.349991 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-08-29 18:12:50.349997 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-08-29 18:12:50.350003 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-08-29 18:12:50.350009 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-08-29 18:12:50.350033 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-08-29 18:12:50.350041 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:50.350047 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-08-29 18:12:50.350057 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-08-29 18:12:50.350063 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-08-29 18:12:50.350069 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-08-29 18:12:50.350075 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-08-29 18:12:50.350081 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-08-29 18:12:50.350087 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:50.350093 | 
orchestrator |
2025-08-29 18:12:50.350099 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-08-29 18:12:50.350105 | orchestrator |
2025-08-29 18:12:50.350112 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-08-29 18:12:50.350118 | orchestrator | Friday 29 August 2025 18:12:46 +0000 (0:00:01.419) 0:09:09.309 *********
2025-08-29 18:12:50.350124 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-08-29 18:12:50.350133 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-08-29 18:12:50.350139 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.350145 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-08-29 18:12:50.350151 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-08-29 18:12:50.350157 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.350163 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-08-29 18:12:50.350170 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-08-29 18:12:50.350176 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.350182 | orchestrator |
2025-08-29 18:12:50.350188 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-08-29 18:12:50.350194 | orchestrator |
2025-08-29 18:12:50.350200 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-08-29 18:12:50.350206 | orchestrator | Friday 29 August 2025 18:12:46 +0000 (0:00:00.506) 0:09:09.816 *********
2025-08-29 18:12:50.350212 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.350218 | orchestrator |
2025-08-29 18:12:50.350224 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-08-29 18:12:50.350230 | orchestrator |
2025-08-29 18:12:50.350236 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-08-29 18:12:50.350242 | orchestrator | Friday 29 August 2025 18:12:47 +0000 (0:00:00.872) 0:09:10.688 *********
2025-08-29 18:12:50.350248 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:12:50.350255 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:12:50.350261 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:12:50.350267 | orchestrator |
2025-08-29 18:12:50.350273 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:12:50.350279 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:12:50.350285 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-08-29 18:12:50.350292 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 18:12:50.350298 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 18:12:50.350304 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-08-29 18:12:50.350310 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-08-29 18:12:50.350316 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-08-29 18:12:50.350327 | orchestrator |
2025-08-29 18:12:50.350333 | orchestrator |
2025-08-29 18:12:50.350339 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:12:50.350345 | orchestrator | Friday 29 August 2025 18:12:47 +0000 (0:00:00.423) 0:09:11.112 *********
2025-08-29 18:12:50.350351 | orchestrator | ===============================================================================
2025-08-29 18:12:50.350357 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.94s
2025-08-29 18:12:50.350363 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.40s
2025-08-29 18:12:50.350373 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.49s
2025-08-29 18:12:50.350379 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.40s
2025-08-29 18:12:50.350385 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.38s
2025-08-29 18:12:50.350391 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.58s
2025-08-29 18:12:50.350397 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.52s
2025-08-29 18:12:50.350403 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.54s
2025-08-29 18:12:50.350409 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.27s
2025-08-29 18:12:50.350415 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.48s
2025-08-29 18:12:50.350421 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.79s
2025-08-29 18:12:50.350427 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.94s
2025-08-29 18:12:50.350433 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.86s
2025-08-29 18:12:50.350439 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.36s
2025-08-29 18:12:50.350445 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.41s
2025-08-29 18:12:50.350451 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.99s
2025-08-29 18:12:50.350457 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 9.77s
2025-08-29 18:12:50.350463 | orchestrator | nova-cell : Create cell ------------------------------------------------- 9.44s
2025-08-29 18:12:50.350469 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.24s
2025-08-29 18:12:50.350475 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.46s
2025-08-29 18:12:50.350484 | orchestrator | 2025-08-29 18:12:50 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED
2025-08-29 18:12:50.350490 | orchestrator | 2025-08-29 18:12:50 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED
2025-08-29 18:12:50.350496 | orchestrator | 2025-08-29 18:12:50 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state STARTED
2025-08-29 18:12:50.350502 | orchestrator | 2025-08-29 18:12:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 18:12:53.387233 | orchestrator | 2025-08-29 18:12:53 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED
2025-08-29 18:12:53.390405 | orchestrator | 2025-08-29 18:12:53 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED
2025-08-29 18:12:53.395400 | orchestrator |
2025-08-29 18:12:53.395444 | orchestrator |
2025-08-29 18:12:53.395456 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:12:53.395469 | orchestrator |
2025-08-29 18:12:53.395480 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:12:53.395492 | orchestrator | Friday 29 August 2025 18:10:57 +0000 (0:00:00.292) 0:00:00.292 *********
2025-08-29 18:12:53.395503 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:12:53.395516 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:12:53.395550 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:12:53.395608 | orchestrator |
2025-08-29
18:12:53.395620 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:12:53.395631 | orchestrator | Friday 29 August 2025 18:10:57 +0000 (0:00:00.302) 0:00:00.595 ********* 2025-08-29 18:12:53.395736 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-08-29 18:12:53.395749 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-08-29 18:12:53.395760 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-08-29 18:12:53.395770 | orchestrator | 2025-08-29 18:12:53.395781 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-08-29 18:12:53.395792 | orchestrator | 2025-08-29 18:12:53.395802 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 18:12:53.395813 | orchestrator | Friday 29 August 2025 18:10:57 +0000 (0:00:00.436) 0:00:01.031 ********* 2025-08-29 18:12:53.395824 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:12:53.395836 | orchestrator | 2025-08-29 18:12:53.396087 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-08-29 18:12:53.396099 | orchestrator | Friday 29 August 2025 18:10:58 +0000 (0:00:00.596) 0:00:01.628 ********* 2025-08-29 18:12:53.396110 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-08-29 18:12:53.396121 | orchestrator | 2025-08-29 18:12:53.396131 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-08-29 18:12:53.396142 | orchestrator | Friday 29 August 2025 18:11:01 +0000 (0:00:03.243) 0:00:04.871 ********* 2025-08-29 18:12:53.396152 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-08-29 18:12:53.396163 | orchestrator | changed: 
[testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-08-29 18:12:53.396174 | orchestrator | 2025-08-29 18:12:53.396185 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-08-29 18:12:53.396195 | orchestrator | Friday 29 August 2025 18:11:07 +0000 (0:00:06.195) 0:00:11.066 ********* 2025-08-29 18:12:53.396206 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 18:12:53.396217 | orchestrator | 2025-08-29 18:12:53.396227 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-08-29 18:12:53.396255 | orchestrator | Friday 29 August 2025 18:11:10 +0000 (0:00:03.017) 0:00:14.084 ********* 2025-08-29 18:12:53.396266 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 18:12:53.396277 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-08-29 18:12:53.396287 | orchestrator | 2025-08-29 18:12:53.396298 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-08-29 18:12:53.396309 | orchestrator | Friday 29 August 2025 18:11:14 +0000 (0:00:03.741) 0:00:17.825 ********* 2025-08-29 18:12:53.396319 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 18:12:53.396330 | orchestrator | 2025-08-29 18:12:53.396340 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-08-29 18:12:53.396351 | orchestrator | Friday 29 August 2025 18:11:17 +0000 (0:00:03.297) 0:00:21.123 ********* 2025-08-29 18:12:53.396362 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-08-29 18:12:53.396372 | orchestrator | 2025-08-29 18:12:53.396383 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-08-29 18:12:53.396393 | orchestrator | Friday 29 August 2025 18:11:21 +0000 (0:00:03.801) 0:00:24.924 ********* 2025-08-29 
18:12:53.396404 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.396415 | orchestrator | 2025-08-29 18:12:53.396425 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-08-29 18:12:53.396436 | orchestrator | Friday 29 August 2025 18:11:24 +0000 (0:00:02.967) 0:00:27.892 ********* 2025-08-29 18:12:53.396447 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.396457 | orchestrator | 2025-08-29 18:12:53.396479 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-08-29 18:12:53.396490 | orchestrator | Friday 29 August 2025 18:11:28 +0000 (0:00:03.608) 0:00:31.500 ********* 2025-08-29 18:12:53.396501 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.396511 | orchestrator | 2025-08-29 18:12:53.396522 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-08-29 18:12:53.396533 | orchestrator | Friday 29 August 2025 18:11:31 +0000 (0:00:03.464) 0:00:34.965 ********* 2025-08-29 18:12:53.396580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2025-08-29 18:12:53.396597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.396615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.396628 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.396648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.396670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.396681 | orchestrator | 2025-08-29 18:12:53.396694 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-08-29 18:12:53.396707 | orchestrator | Friday 29 August 2025 18:11:33 +0000 (0:00:01.317) 0:00:36.283 ********* 2025-08-29 18:12:53.396719 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:53.396730 | orchestrator | 2025-08-29 18:12:53.396743 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-08-29 18:12:53.396755 | orchestrator | Friday 29 August 2025 18:11:33 +0000 (0:00:00.107) 0:00:36.390 ********* 2025-08-29 18:12:53.396767 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:53.396781 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:53.396793 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:53.396805 | orchestrator | 2025-08-29 18:12:53.396817 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-08-29 18:12:53.396829 | orchestrator | Friday 29 August 2025 18:11:33 +0000 (0:00:00.489) 0:00:36.880 ********* 2025-08-29 18:12:53.396842 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 18:12:53.396854 | orchestrator | 2025-08-29 18:12:53.396866 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-08-29 18:12:53.396877 | orchestrator | Friday 29 August 2025 18:11:34 +0000 (0:00:00.877) 0:00:37.757 ********* 2025-08-29 18:12:53.396890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.396908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.396935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.396957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.396971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.396983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.396996 | orchestrator | 2025-08-29 18:12:53.397008 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 18:12:53.397032 | orchestrator | Friday 29 August 2025 18:11:36 +0000 (0:00:02.430) 0:00:40.187 ********* 2025-08-29 18:12:53.397045 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:12:53.397055 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:12:53.397066 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:12:53.397077 | orchestrator | 2025-08-29 18:12:53.397095 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 18:12:53.397114 | orchestrator | Friday 29 August 2025 18:11:37 +0000 (0:00:00.299) 0:00:40.487 ********* 2025-08-29 18:12:53.397133 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:12:53.397153 | 
orchestrator | 2025-08-29 18:12:53.397172 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 18:12:53.397191 | orchestrator | Friday 29 August 2025 18:11:37 +0000 (0:00:00.692) 0:00:41.179 ********* 2025-08-29 18:12:53.397211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397324 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397365 | orchestrator | 2025-08-29 18:12:53.397383 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 18:12:53.397394 | orchestrator | Friday 29 August 2025 18:11:40 +0000 (0:00:02.272) 0:00:43.452 ********* 2025-08-29 18:12:53.397414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.397426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.397437 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:53.397461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.397473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.397484 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:53.397495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.397514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.397525 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:53.397536 | orchestrator | 2025-08-29 18:12:53.397546 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 18:12:53.397557 | orchestrator | Friday 29 August 2025 18:11:41 +0000 (0:00:01.006) 0:00:44.458 ********* 2025-08-29 18:12:53.397602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.397627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.397638 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:53.397649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.397667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.397679 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:53.397690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.397708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.397719 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:53.397730 | orchestrator | 2025-08-29 18:12:53.397740 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-08-29 18:12:53.397751 | orchestrator | Friday 29 August 2025 18:11:44 +0000 (0:00:03.491) 0:00:47.949 ********* 2025-08-29 18:12:53.397767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397858 | orchestrator | 2025-08-29 18:12:53.397869 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 18:12:53.397880 | orchestrator | Friday 29 August 2025 18:11:47 +0000 (0:00:03.179) 0:00:51.129 ********* 2025-08-29 18:12:53.397891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.397944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.397977 | orchestrator | 2025-08-29 18:12:53.397988 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 18:12:53.398004 | orchestrator | Friday 29 August 2025 18:11:52 +0000 (0:00:04.903) 0:00:56.033 ********* 2025-08-29 18:12:53.398063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.398090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.398101 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:53.398118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.398129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.398140 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
18:12:53.398160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 18:12:53.398183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 18:12:53.398194 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:53.398205 | orchestrator | 2025-08-29 18:12:53.398216 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 18:12:53.398227 | 
orchestrator | Friday 29 August 2025 18:11:53 +0000 (0:00:00.797) 0:00:56.830 ********* 2025-08-29 18:12:53.398242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.398254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2025-08-29 18:12:53.398265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 18:12:53.398289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.398301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.398317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 18:12:53.398328 | orchestrator | 2025-08-29 18:12:53.398339 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 18:12:53.398350 | orchestrator | Friday 29 August 2025 18:11:55 +0000 (0:00:02.098) 0:00:58.929 ********* 2025-08-29 18:12:53.398361 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:12:53.398371 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:12:53.398382 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:12:53.398392 | orchestrator | 2025-08-29 18:12:53.398403 | orchestrator | TASK [magnum : Creating Magnum database] 
*************************************** 2025-08-29 18:12:53.398413 | orchestrator | Friday 29 August 2025 18:11:56 +0000 (0:00:00.298) 0:00:59.227 ********* 2025-08-29 18:12:53.398424 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.398434 | orchestrator | 2025-08-29 18:12:53.398445 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-08-29 18:12:53.398456 | orchestrator | Friday 29 August 2025 18:11:58 +0000 (0:00:01.986) 0:01:01.214 ********* 2025-08-29 18:12:53.398466 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.398477 | orchestrator | 2025-08-29 18:12:53.398487 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-08-29 18:12:53.398497 | orchestrator | Friday 29 August 2025 18:12:00 +0000 (0:00:02.058) 0:01:03.272 ********* 2025-08-29 18:12:53.398508 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.398518 | orchestrator | 2025-08-29 18:12:53.398529 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 18:12:53.398539 | orchestrator | Friday 29 August 2025 18:12:18 +0000 (0:00:18.045) 0:01:21.317 ********* 2025-08-29 18:12:53.398556 | orchestrator | 2025-08-29 18:12:53.398621 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 18:12:53.398632 | orchestrator | Friday 29 August 2025 18:12:18 +0000 (0:00:00.058) 0:01:21.376 ********* 2025-08-29 18:12:53.398643 | orchestrator | 2025-08-29 18:12:53.398653 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 18:12:53.398664 | orchestrator | Friday 29 August 2025 18:12:18 +0000 (0:00:00.057) 0:01:21.433 ********* 2025-08-29 18:12:53.398674 | orchestrator | 2025-08-29 18:12:53.398685 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-08-29 18:12:53.398695 | 
orchestrator | Friday 29 August 2025 18:12:18 +0000 (0:00:00.059) 0:01:21.493 ********* 2025-08-29 18:12:53.398706 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.398717 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:12:53.398727 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:12:53.398738 | orchestrator | 2025-08-29 18:12:53.398749 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-08-29 18:12:53.398759 | orchestrator | Friday 29 August 2025 18:12:40 +0000 (0:00:22.503) 0:01:43.996 ********* 2025-08-29 18:12:53.398770 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:12:53.398780 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:12:53.398791 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:12:53.398802 | orchestrator | 2025-08-29 18:12:53.398820 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:12:53.398833 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 18:12:53.398844 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 18:12:53.398855 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 18:12:53.398866 | orchestrator | 2025-08-29 18:12:53.398877 | orchestrator | 2025-08-29 18:12:53.398887 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:12:53.398898 | orchestrator | Friday 29 August 2025 18:12:52 +0000 (0:00:11.642) 0:01:55.638 ********* 2025-08-29 18:12:53.398908 | orchestrator | =============================================================================== 2025-08-29 18:12:53.398919 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.50s 2025-08-29 18:12:53.398930 | orchestrator | magnum : 
Running Magnum bootstrap container ---------------------------- 18.05s 2025-08-29 18:12:53.398940 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.64s 2025-08-29 18:12:53.398951 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.20s 2025-08-29 18:12:53.398962 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.90s 2025-08-29 18:12:53.398972 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.80s 2025-08-29 18:12:53.398982 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.74s 2025-08-29 18:12:53.398991 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.61s 2025-08-29 18:12:53.399001 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.49s 2025-08-29 18:12:53.399010 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.46s 2025-08-29 18:12:53.399019 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.30s 2025-08-29 18:12:53.399029 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.24s 2025-08-29 18:12:53.399038 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.18s 2025-08-29 18:12:53.399048 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.02s 2025-08-29 18:12:53.399064 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.97s 2025-08-29 18:12:53.399078 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.43s 2025-08-29 18:12:53.399088 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.27s 2025-08-29 18:12:53.399098 | orchestrator | magnum : Check magnum 
containers ---------------------------------------- 2.10s 2025-08-29 18:12:53.399107 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.06s 2025-08-29 18:12:53.399116 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.99s 2025-08-29 18:12:53.399126 | orchestrator | 2025-08-29 18:12:53 | INFO  | Task 37ae7a02-9b1a-4bb8-a7e8-71b2269ca727 is in state SUCCESS 2025-08-29 18:12:53.399136 | orchestrator | 2025-08-29 18:12:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:12:56.439481 | orchestrator | 2025-08-29 18:12:56 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:12:56.441716 | orchestrator | 2025-08-29 18:12:56 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:12:56.441994 | orchestrator | 2025-08-29 18:12:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:12:59.481087 | orchestrator | 2025-08-29 18:12:59 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:12:59.481190 | orchestrator | 2025-08-29 18:12:59 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:12:59.481205 | orchestrator | 2025-08-29 18:12:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:02.525418 | orchestrator | 2025-08-29 18:13:02 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:13:02.527674 | orchestrator | 2025-08-29 18:13:02 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:02.527710 | orchestrator | 2025-08-29 18:13:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:05.580206 | orchestrator | 2025-08-29 18:13:05 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:13:05.582711 | orchestrator | 2025-08-29 18:13:05 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 
18:13:05.582738 | orchestrator | 2025-08-29 18:13:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:08.633149 | orchestrator | 2025-08-29 18:13:08 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state STARTED 2025-08-29 18:13:08.635173 | orchestrator | 2025-08-29 18:13:08 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:08.635454 | orchestrator | 2025-08-29 18:13:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:11.679261 | orchestrator | 2025-08-29 18:13:11 | INFO  | Task 84bd52d0-c9df-49fa-84ec-095ba7aa2296 is in state SUCCESS 2025-08-29 18:13:11.681292 | orchestrator | 2025-08-29 18:13:11.681337 | orchestrator | 2025-08-29 18:13:11.681352 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:13:11.681364 | orchestrator | 2025-08-29 18:13:11.681376 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:13:11.681388 | orchestrator | Friday 29 August 2025 18:10:59 +0000 (0:00:00.277) 0:00:00.277 ********* 2025-08-29 18:13:11.681399 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:13:11.681412 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:13:11.681424 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:13:11.681435 | orchestrator | 2025-08-29 18:13:11.681447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:13:11.681917 | orchestrator | Friday 29 August 2025 18:10:59 +0000 (0:00:00.331) 0:00:00.609 ********* 2025-08-29 18:13:11.681929 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-08-29 18:13:11.681967 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-08-29 18:13:11.681979 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-08-29 18:13:11.681990 | orchestrator | 2025-08-29 18:13:11.682001 | orchestrator | PLAY [Apply role grafana] 
****************************************************** 2025-08-29 18:13:11.682011 | orchestrator | 2025-08-29 18:13:11.682466 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 18:13:11.682479 | orchestrator | Friday 29 August 2025 18:11:00 +0000 (0:00:00.438) 0:00:01.048 ********* 2025-08-29 18:13:11.682490 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:13:11.682501 | orchestrator | 2025-08-29 18:13:11.682512 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-08-29 18:13:11.682522 | orchestrator | Friday 29 August 2025 18:11:00 +0000 (0:00:00.582) 0:00:01.630 ********* 2025-08-29 18:13:11.682552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 18:13:11.682567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 18:13:11.682579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 18:13:11.682590 | orchestrator | 2025-08-29 18:13:11.682634 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-08-29 18:13:11.682652 | orchestrator | Friday 29 August 2025 18:11:01 +0000 (0:00:00.716) 0:00:02.347 ********* 2025-08-29 18:13:11.682670 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-08-29 18:13:11.682689 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-08-29 18:13:11.682709 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 18:13:11.682727 | orchestrator | 2025-08-29 18:13:11.682742 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-08-29 18:13:11.682752 | orchestrator | Friday 29 August 2025 18:11:02 +0000 (0:00:00.922) 0:00:03.269 ********* 2025-08-29 18:13:11.682763 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:13:11.682774 | orchestrator | 
2025-08-29 18:13:11.682798 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-08-29 18:13:11.682809 | orchestrator | Friday 29 August 2025  18:11:03 +0000 (0:00:00.738) 0:00:04.008 *********
2025-08-29 18:13:11.682877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.682891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.682909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.682921 | orchestrator |
2025-08-29 18:13:11.682932 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-08-29 18:13:11.682943 | orchestrator | Friday 29 August 2025  18:11:04 +0000 (0:00:01.363) 0:00:05.372 *********
2025-08-29 18:13:11.682954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.682965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.682977 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.682996 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.683042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683056 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.683066 | orchestrator |
2025-08-29 18:13:11.683077 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-08-29 18:13:11.683089 | orchestrator | Friday 29 August 2025  18:11:04 +0000 (0:00:00.366) 0:00:05.738 *********
2025-08-29 18:13:11.683101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683132 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.683144 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.683157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683169 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.683181 | orchestrator |
2025-08-29 18:13:11.683193 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-08-29 18:13:11.683205 | orchestrator | Friday 29 August 2025  18:11:05 +0000 (0:00:00.836) 0:00:06.574 *********
2025-08-29 18:13:11.683217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683294 | orchestrator |
2025-08-29 18:13:11.683306 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-08-29 18:13:11.683318 | orchestrator | Friday 29 August 2025  18:11:06 +0000 (0:00:01.226) 0:00:07.801 *********
2025-08-29 18:13:11.683330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.683380 | orchestrator |
2025-08-29 18:13:11.683392 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-08-29 18:13:11.683404 | orchestrator | Friday 29 August 2025  18:11:08 +0000 (0:00:01.353) 0:00:09.154 *********
2025-08-29 18:13:11.683416 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.683428 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.683440 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.683451 | orchestrator |
2025-08-29 18:13:11.683462 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-08-29 18:13:11.683473 | orchestrator | Friday 29 August 2025  18:11:08 +0000 (0:00:00.520) 0:00:09.675 *********
2025-08-29 18:13:11.683483 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 18:13:11.683494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 18:13:11.683505 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 18:13:11.683515 | orchestrator |
2025-08-29 18:13:11.683526 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-08-29 18:13:11.683537 | orchestrator | Friday 29 August 2025  18:11:09 +0000 (0:00:01.231) 0:00:10.907 *********
2025-08-29 18:13:11.683547 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 18:13:11.683589 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 18:13:11.683627 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 18:13:11.683638 | orchestrator |
2025-08-29 18:13:11.683649 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-08-29 18:13:11.683659 | orchestrator | Friday 29 August 2025  18:11:11 +0000 (0:00:01.343) 0:00:12.250 *********
2025-08-29 18:13:11.683670 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 18:13:11.683681 | orchestrator |
2025-08-29 18:13:11.683691 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-08-29 18:13:11.683702 | orchestrator | Friday 29 August 2025  18:11:12 +0000 (0:00:00.759) 0:00:13.009 *********
2025-08-29 18:13:11.683712 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-08-29 18:13:11.683723 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-08-29 18:13:11.683733 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:13:11.683744 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:13:11.683755 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:13:11.683765 | orchestrator |
2025-08-29 18:13:11.683776 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-08-29 18:13:11.683786 | orchestrator | Friday 29 August 2025  18:11:12 +0000 (0:00:00.512) 0:00:13.714 *********
2025-08-29 18:13:11.683797 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.683807 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.683818 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.683828 | orchestrator |
2025-08-29 18:13:11.683839 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-08-29 18:13:11.683849 | orchestrator | Friday 29 August 2025  18:11:13 +0000 (0:00:00.512) 0:00:14.227 *********
2025-08-29 18:13:11.683866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096358, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8903444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.683885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096358, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8903444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.683896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1096358, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8903444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.683908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096419, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9029305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.683954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096419, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9029305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.683968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096419, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9029305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.683979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096370, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8921735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096370, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8921735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1096370, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8921735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096422, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9051871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096422, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9051871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096422, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9051871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096393, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8961709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096393, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8961709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1096393, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8961709, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096409, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9001188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096409, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9001188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096409, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9001188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096357, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8889542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096357, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8889542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1096357, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8889542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096362, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8908176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096362, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8908176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1096362, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8908176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096371, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8927333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096371, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8927333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1096371, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8927333, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096399, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8973684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096399, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8973684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1096399, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8973684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096414, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9020128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096414, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9020128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096414, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9020128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096364, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.891882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.684479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096364, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.891882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr':
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1096364, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.891882, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096406, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9001188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096406, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9001188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096406, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9001188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096395, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8966742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096395, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8966742, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1096395, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8966742, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096386, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8958762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096386, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 
1752315970.0, 'ctime': 1756487906.8958762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1096386, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8958762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096380, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8946922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096380, 'dev': 112, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8946922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1096380, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8946922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096402, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.898983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 
'inode': 1096402, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.898983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1096402, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.898983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096374, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8935769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 44791, 'inode': 1096374, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8935769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1096374, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.8935769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096412, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.901029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096412, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.901029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096412, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.901029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096621, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9659097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096621, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9659097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096621, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9659097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096457, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9196644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096457, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9196644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096457, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9196644, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096439, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9104288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096439, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9104288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096439, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9104288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096485, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9257832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.684990 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096485, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9257832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096485, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9257832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096430, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.906029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-08-29 18:13:11.685041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096430, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.906029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096430, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.906029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096557, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.954204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096557, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.954204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096557, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.954204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096493, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1756487906.9340847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096493, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9340847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096493, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9340847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 22317, 'inode': 1096560, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.954204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096560, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.954204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096560, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.954204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096589, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9601166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096589, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9601166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096589, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9601166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096554, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9530265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096554, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9530265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096554, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9530265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685280 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096473, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9231484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096473, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9231484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096473, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9231484, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685318 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096451, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9160292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096451, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9160292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096451, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9160292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-08-29 18:13:11.685363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096470, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9200292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096470, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9200292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096470, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9200292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096441, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9141889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096441, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9141889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096441, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9141889, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096477, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9241173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096477, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9241173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096477, 'dev': 112, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9241173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096573, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9591193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096573, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9591193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096564, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096573, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9591193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096564, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096432, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096564, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9560297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096432, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096434, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.909029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096432, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9070292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096434, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.909029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685692 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096517, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9525414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096434, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.909029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096517, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9525414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096562, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9550295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096517, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9525414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 18:13:11.685766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096562, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9550295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.685777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096562, 'dev': 112, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1756487906.9550295, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 18:13:11.685788 | orchestrator |
2025-08-29 18:13:11.685799 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-08-29 18:13:11.685810 | orchestrator | Friday 29 August 2025 18:11:51 +0000 (0:00:38.040) 0:00:52.267 *********
2025-08-29 18:13:11.685826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.685847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.685859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 18:13:11.685870 | orchestrator |
2025-08-29 18:13:11.685881 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-08-29 18:13:11.685891 | orchestrator | Friday 29 August 2025 18:11:52 +0000 (0:00:01.141) 0:00:53.408 *********
2025-08-29 18:13:11.685902 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:13:11.685913 | orchestrator |
2025-08-29 18:13:11.685924 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-08-29 18:13:11.685940 | orchestrator | Friday 29 August 2025 18:11:54 +0000 (0:00:02.274) 0:00:55.683 *********
2025-08-29 18:13:11.685951 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:13:11.685961 | orchestrator |
2025-08-29 18:13:11.685972 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 18:13:11.685983 | orchestrator | Friday 29 August 2025 18:11:56 +0000 (0:00:02.081) 0:00:57.765 *********
2025-08-29 18:13:11.685994 | orchestrator |
2025-08-29 18:13:11.686004 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 18:13:11.686065 | orchestrator | Friday 29 August 2025 18:11:56 +0000 (0:00:00.232) 0:00:57.997 *********
2025-08-29 18:13:11.686079 | orchestrator |
2025-08-29 18:13:11.686089 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-08-29 18:13:11.686100 | orchestrator | Friday 29 August 2025 18:11:57 +0000 (0:00:00.080) 0:00:58.078 *********
2025-08-29 18:13:11.686110 | orchestrator |
2025-08-29 18:13:11.686121 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-08-29 18:13:11.686131 | orchestrator | Friday 29 August 2025 18:11:57 +0000 (0:00:00.084) 0:00:58.163 *********
2025-08-29 18:13:11.686142 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.686152 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.686163 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:13:11.686174 | orchestrator |
2025-08-29 18:13:11.686184 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-08-29 18:13:11.686195 | orchestrator | Friday 29 August 2025 18:11:58 +0000 (0:00:01.766) 0:00:59.929 *********
2025-08-29 18:13:11.686205 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.686216 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.686227 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-08-29 18:13:11.686238 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-08-29 18:13:11.686256 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-08-29 18:13:11.686267 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:13:11.686278 | orchestrator |
2025-08-29 18:13:11.686288 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-08-29 18:13:11.686299 | orchestrator | Friday 29 August 2025 18:12:36 +0000 (0:00:37.724) 0:01:37.654 *********
2025-08-29 18:13:11.686310 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.686320 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:13:11.686331 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:13:11.686341 | orchestrator |
2025-08-29 18:13:11.686357 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-08-29 18:13:11.686368 | orchestrator | Friday 29 August 2025 18:13:05 +0000 (0:00:28.549) 0:02:06.203 *********
2025-08-29 18:13:11.686379 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:13:11.686389 | orchestrator |
2025-08-29 18:13:11.686400 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-08-29 18:13:11.686411 | orchestrator | Friday 29 August 2025 18:13:07 +0000 (0:00:02.009) 0:02:08.213 *********
2025-08-29 18:13:11.686421 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.686432 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:13:11.686442 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:13:11.686453 | orchestrator |
2025-08-29 18:13:11.686463 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-08-29 18:13:11.686474 | orchestrator | Friday 29 August 2025 18:13:07 +0000 (0:00:00.493) 0:02:08.706 *********
2025-08-29 18:13:11.686485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-08-29 18:13:11.686498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-08-29 18:13:11.686510 | orchestrator |
2025-08-29 18:13:11.686520 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-08-29 18:13:11.686531 | orchestrator | Friday 29 August 2025 18:13:09 +0000 (0:00:02.159) 0:02:10.865 *********
2025-08-29 18:13:11.686541 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:13:11.686552 | orchestrator |
2025-08-29 18:13:11.686562 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:13:11.686574 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 18:13:11.686585 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 18:13:11.686618 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 18:13:11.686630 | orchestrator |
2025-08-29 18:13:11.686641 | orchestrator |
2025-08-29 18:13:11.686651 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:13:11.686662 | orchestrator | Friday 29 August 2025 18:13:10 +0000 (0:00:00.262) 0:02:11.127 *********
2025-08-29 18:13:11.686673 |
orchestrator | =============================================================================== 2025-08-29 18:13:11.686690 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.04s 2025-08-29 18:13:11.686701 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.72s 2025-08-29 18:13:11.686711 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.55s 2025-08-29 18:13:11.686729 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.27s 2025-08-29 18:13:11.686739 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.16s 2025-08-29 18:13:11.686750 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.08s 2025-08-29 18:13:11.686761 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.01s 2025-08-29 18:13:11.686771 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.77s 2025-08-29 18:13:11.686781 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.36s 2025-08-29 18:13:11.686792 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.35s 2025-08-29 18:13:11.686802 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.34s 2025-08-29 18:13:11.686813 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.23s 2025-08-29 18:13:11.686823 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.23s 2025-08-29 18:13:11.686833 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.14s 2025-08-29 18:13:11.686844 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.92s 2025-08-29 18:13:11.686854 | orchestrator | 
service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.84s 2025-08-29 18:13:11.686865 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s 2025-08-29 18:13:11.686875 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s 2025-08-29 18:13:11.686886 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.72s 2025-08-29 18:13:11.686896 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.70s 2025-08-29 18:13:11.686906 | orchestrator | 2025-08-29 18:13:11 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:11.686922 | orchestrator | 2025-08-29 18:13:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:14.723206 | orchestrator | 2025-08-29 18:13:14 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:14.723306 | orchestrator | 2025-08-29 18:13:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:17.774202 | orchestrator | 2025-08-29 18:13:17 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:17.774299 | orchestrator | 2025-08-29 18:13:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:20.815827 | orchestrator | 2025-08-29 18:13:20 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:20.815910 | orchestrator | 2025-08-29 18:13:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:23.859934 | orchestrator | 2025-08-29 18:13:23 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:23.860034 | orchestrator | 2025-08-29 18:13:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 18:13:26.900763 | orchestrator | 2025-08-29 18:13:26 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED 2025-08-29 18:13:26.900879 | orchestrator | 
2025-08-29 18:13:26 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state STARTED" / "Wait 1 second(s) until the next check" poll cycles from 18:13:29 through 18:15:37 elided ...]
2025-08-29 18:15:40.824132 | orchestrator | 2025-08-29 18:15:40 | INFO  | Task 4a96f723-062c-48fc-be79-e45ed873d470 is in state SUCCESS 2025-08-29 18:15:40.826343 | orchestrator | 2025-08-29 18:15:40.827093 | orchestrator | 2025-08-29 18:15:40.827115 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 18:15:40.827127 | orchestrator | 2025-08-29 18:15:40.827139 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 18:15:40.827150 | orchestrator | Friday 29 August 2025 18:11:08 +0000 (0:00:00.265) 0:00:00.265 ********* 2025-08-29 18:15:40.827162 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:15:40.827174 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:15:40.827184 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:15:40.827195 | orchestrator | 2025-08-29 18:15:40.827206 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 18:15:40.827217 | orchestrator | Friday 29 August 2025 18:11:08 +0000 (0:00:00.326) 0:00:00.591 ********* 2025-08-29 18:15:40.827229 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-08-29 18:15:40.827269 | orchestrator |
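The long STARTED-then-SUCCESS sequence above is the OSISM wait loop polling a background task until it finishes. A minimal sketch of such a poll loop — the function name, parameters, and FAILURE handling are illustrative, not the actual osism client internals:

```python
import time

def wait_for_task(get_state, task_id, interval=1.0, timeout=300.0):
    """Poll get_state(task_id) until it reports SUCCESS, fail on FAILURE or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state == "SUCCESS":
            return True
        if state == "FAILURE":
            raise RuntimeError(f"Task {task_id} failed")
        # The log shows the client announcing each sleep before the next check.
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} still not finished after {timeout}s")
```

In the log the effective check spacing is about three seconds per cycle, so a ~2.5 minute STARTED phase produces the dozens of poll lines elided above.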
ok: [testbed-node-1] => (item=enable_octavia_True) 2025-08-29 18:15:40.827280 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-08-29 18:15:40.827292 | orchestrator | 2025-08-29 18:15:40.827302 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-08-29 18:15:40.827313 | orchestrator | 2025-08-29 18:15:40.827324 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 18:15:40.827335 | orchestrator | Friday 29 August 2025 18:11:09 +0000 (0:00:00.455) 0:00:01.047 ********* 2025-08-29 18:15:40.827346 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:15:40.827357 | orchestrator | 2025-08-29 18:15:40.827368 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-08-29 18:15:40.827379 | orchestrator | Friday 29 August 2025 18:11:09 +0000 (0:00:00.576) 0:00:01.623 ********* 2025-08-29 18:15:40.827390 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-08-29 18:15:40.827401 | orchestrator | 2025-08-29 18:15:40.827411 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-08-29 18:15:40.827422 | orchestrator | Friday 29 August 2025 18:11:13 +0000 (0:00:03.299) 0:00:04.922 ********* 2025-08-29 18:15:40.827433 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-08-29 18:15:40.827444 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-08-29 18:15:40.827454 | orchestrator | 2025-08-29 18:15:40.827465 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-08-29 18:15:40.827490 | orchestrator | Friday 29 August 2025 18:11:19 +0000 (0:00:06.208) 0:00:11.131 ********* 2025-08-29 
18:15:40.827501 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 18:15:40.827512 | orchestrator | 2025-08-29 18:15:40.827522 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-08-29 18:15:40.827533 | orchestrator | Friday 29 August 2025 18:11:22 +0000 (0:00:03.197) 0:00:14.329 ********* 2025-08-29 18:15:40.827544 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 18:15:40.827555 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 18:15:40.827566 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 18:15:40.827577 | orchestrator | 2025-08-29 18:15:40.827587 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-08-29 18:15:40.827598 | orchestrator | Friday 29 August 2025 18:11:30 +0000 (0:00:07.842) 0:00:22.172 ********* 2025-08-29 18:15:40.827609 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 18:15:40.827620 | orchestrator | 2025-08-29 18:15:40.827631 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-08-29 18:15:40.827642 | orchestrator | Friday 29 August 2025 18:11:33 +0000 (0:00:03.364) 0:00:25.536 ********* 2025-08-29 18:15:40.827653 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 18:15:40.827663 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 18:15:40.827690 | orchestrator | 2025-08-29 18:15:40.827712 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-08-29 18:15:40.827723 | orchestrator | Friday 29 August 2025 18:11:40 +0000 (0:00:07.081) 0:00:32.618 ********* 2025-08-29 18:15:40.827733 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-08-29 18:15:40.827744 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_global_observer) 2025-08-29 18:15:40.827755 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-08-29 18:15:40.827768 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-08-29 18:15:40.827787 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-08-29 18:15:40.827804 | orchestrator | 2025-08-29 18:15:40.827861 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 18:15:40.827895 | orchestrator | Friday 29 August 2025 18:11:55 +0000 (0:00:15.030) 0:00:47.649 ********* 2025-08-29 18:15:40.827911 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:15:40.827928 | orchestrator | 2025-08-29 18:15:40.827946 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-08-29 18:15:40.827963 | orchestrator | Friday 29 August 2025 18:11:56 +0000 (0:00:00.544) 0:00:48.194 ********* 2025-08-29 18:15:40.827981 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.828000 | orchestrator | 2025-08-29 18:15:40.828018 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-08-29 18:15:40.828034 | orchestrator | Friday 29 August 2025 18:12:00 +0000 (0:00:04.435) 0:00:52.629 ********* 2025-08-29 18:15:40.828045 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.828055 | orchestrator | 2025-08-29 18:15:40.828066 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 18:15:40.828130 | orchestrator | Friday 29 August 2025 18:12:05 +0000 (0:00:04.368) 0:00:56.998 ********* 2025-08-29 18:15:40.828143 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:15:40.828154 | orchestrator | 2025-08-29 18:15:40.828164 | orchestrator | TASK [octavia : Create security groups for octavia] 
**************************** 2025-08-29 18:15:40.828175 | orchestrator | Friday 29 August 2025 18:12:08 +0000 (0:00:03.118) 0:01:00.117 ********* 2025-08-29 18:15:40.828187 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-08-29 18:15:40.828198 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 18:15:40.828209 | orchestrator | 2025-08-29 18:15:40.828220 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-08-29 18:15:40.828230 | orchestrator | Friday 29 August 2025 18:12:18 +0000 (0:00:10.394) 0:01:10.511 ********* 2025-08-29 18:15:40.828241 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-08-29 18:15:40.828252 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-08-29 18:15:40.828265 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-08-29 18:15:40.828277 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-08-29 18:15:40.828288 | orchestrator | 2025-08-29 18:15:40.828299 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-08-29 18:15:40.828309 | orchestrator | Friday 29 August 2025 18:12:34 +0000 (0:00:15.439) 0:01:25.951 ********* 2025-08-29 18:15:40.828320 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.828330 | orchestrator | 2025-08-29 18:15:40.828341 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-08-29 18:15:40.828351 | orchestrator | Friday 29 August 2025 18:12:38 +0000 (0:00:04.422) 0:01:30.373 ********* 2025-08-29 
18:15:40.828361 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.828372 | orchestrator | 2025-08-29 18:15:40.828382 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-08-29 18:15:40.828393 | orchestrator | Friday 29 August 2025 18:12:43 +0000 (0:00:05.085) 0:01:35.459 ********* 2025-08-29 18:15:40.828403 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:15:40.828413 | orchestrator | 2025-08-29 18:15:40.828424 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-08-29 18:15:40.828443 | orchestrator | Friday 29 August 2025 18:12:43 +0000 (0:00:00.209) 0:01:35.668 ********* 2025-08-29 18:15:40.828454 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.828464 | orchestrator | 2025-08-29 18:15:40.828474 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 18:15:40.828485 | orchestrator | Friday 29 August 2025 18:12:48 +0000 (0:00:04.482) 0:01:40.151 ********* 2025-08-29 18:15:40.828504 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 18:15:40.828515 | orchestrator | 2025-08-29 18:15:40.828526 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-08-29 18:15:40.828536 | orchestrator | Friday 29 August 2025 18:12:49 +0000 (0:00:01.013) 0:01:41.165 ********* 2025-08-29 18:15:40.828546 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:15:40.828557 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.828568 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:15:40.828578 | orchestrator | 2025-08-29 18:15:40.828588 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-08-29 18:15:40.828599 | orchestrator | Friday 29 August 2025 18:12:54 +0000 (0:00:05.411) 0:01:46.577 ********* 
2025-08-29 18:15:40.828609 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.828620 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.828630 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.828641 | orchestrator |
2025-08-29 18:15:40.828651 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-08-29 18:15:40.828662 | orchestrator | Friday 29 August 2025 18:12:59 +0000 (0:00:04.455) 0:01:51.032 *********
2025-08-29 18:15:40.828672 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.828682 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.828693 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.828703 | orchestrator |
2025-08-29 18:15:40.828714 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-08-29 18:15:40.828724 | orchestrator | Friday 29 August 2025 18:13:00 +0000 (0:00:00.735) 0:01:51.767 *********
2025-08-29 18:15:40.828735 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.828745 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:15:40.828756 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:15:40.828766 | orchestrator |
2025-08-29 18:15:40.828777 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-08-29 18:15:40.828787 | orchestrator | Friday 29 August 2025 18:13:01 +0000 (0:00:01.824) 0:01:53.592 *********
2025-08-29 18:15:40.828798 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.828808 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.828819 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.828901 | orchestrator |
2025-08-29 18:15:40.828912 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-08-29 18:15:40.828923 | orchestrator | Friday 29 August 2025 18:13:03 +0000 (0:00:01.306) 0:01:54.899 *********
2025-08-29 18:15:40.828934 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.828944 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.828954 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.828965 | orchestrator |
2025-08-29 18:15:40.828975 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-08-29 18:15:40.828986 | orchestrator | Friday 29 August 2025 18:13:04 +0000 (0:00:01.173) 0:01:56.072 *********
2025-08-29 18:15:40.828996 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.829007 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.829018 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.829028 | orchestrator |
2025-08-29 18:15:40.829072 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-08-29 18:15:40.829085 | orchestrator | Friday 29 August 2025 18:13:06 +0000 (0:00:01.994) 0:01:58.067 *********
2025-08-29 18:15:40.829095 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.829106 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.829116 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.829127 | orchestrator |
2025-08-29 18:15:40.829137 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-08-29 18:15:40.829148 | orchestrator | Friday 29 August 2025 18:13:08 +0000 (0:00:01.767) 0:01:59.834 *********
2025-08-29 18:15:40.829158 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.829169 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:15:40.829187 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:15:40.829198 | orchestrator |
2025-08-29 18:15:40.829209 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-08-29 18:15:40.829219 | orchestrator | Friday 29 August 2025 18:13:08 +0000 (0:00:00.647) 0:02:00.482 *********
2025-08-29 18:15:40.829230 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.829240 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:15:40.829251 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:15:40.829261 | orchestrator |
2025-08-29 18:15:40.829272 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 18:15:40.829282 | orchestrator | Friday 29 August 2025 18:13:12 +0000 (0:00:03.771) 0:02:04.254 *********
2025-08-29 18:15:40.829293 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:15:40.829304 | orchestrator |
2025-08-29 18:15:40.829314 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-08-29 18:15:40.829324 | orchestrator | Friday 29 August 2025 18:13:13 +0000 (0:00:00.701) 0:02:04.955 *********
2025-08-29 18:15:40.829335 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.829345 | orchestrator |
2025-08-29 18:15:40.829356 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-08-29 18:15:40.829366 | orchestrator | Friday 29 August 2025 18:13:17 +0000 (0:00:03.855) 0:02:08.811 *********
2025-08-29 18:15:40.829376 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.829387 | orchestrator |
2025-08-29 18:15:40.829397 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-08-29 18:15:40.829407 | orchestrator | Friday 29 August 2025 18:13:20 +0000 (0:00:03.029) 0:02:11.841 *********
2025-08-29 18:15:40.829417 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-08-29 18:15:40.829426 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-08-29 18:15:40.829436 | orchestrator |
2025-08-29 18:15:40.829450 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-08-29 18:15:40.829460 | orchestrator | Friday 29 August 2025 18:13:26 +0000 (0:00:06.653) 0:02:18.494 *********
2025-08-29 18:15:40.829469 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.829478 | orchestrator |
2025-08-29 18:15:40.829488 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-08-29 18:15:40.829497 | orchestrator | Friday 29 August 2025 18:13:29 +0000 (0:00:03.166) 0:02:21.661 *********
2025-08-29 18:15:40.829506 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:15:40.829516 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:15:40.829525 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:15:40.829534 | orchestrator |
2025-08-29 18:15:40.829544 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-08-29 18:15:40.829553 | orchestrator | Friday 29 August 2025 18:13:30 +0000 (0:00:00.331) 0:02:21.992 *********
2025-08-29 18:15:40.829565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.829604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.829624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.829635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.829652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.829669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.829687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.829714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.829766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.829784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.829801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.829845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.829865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.829883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.829933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.829944 | orchestrator |
2025-08-29 18:15:40.829954 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-08-29 18:15:40.829963 | orchestrator | Friday 29 August 2025 18:13:32 +0000 (0:00:02.374) 0:02:24.367 *********
2025-08-29 18:15:40.829973 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:15:40.829983 | orchestrator |
2025-08-29 18:15:40.829993 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-08-29 18:15:40.830002 | orchestrator | Friday 29 August 2025 18:13:32 +0000 (0:00:00.159) 0:02:24.527 *********
2025-08-29 18:15:40.830011 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:15:40.830052 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:15:40.830061 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:15:40.830071 | orchestrator |
2025-08-29 18:15:40.830080 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-08-29 18:15:40.830089 | orchestrator | Friday 29 August 2025 18:13:33 +0000 (0:00:00.508) 0:02:25.036 *********
2025-08-29 18:15:40.830100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.830126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.830163 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:15:40.830199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.830231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.830267 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:15:40.830301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.830322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.830363 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:15:40.830372 | orchestrator |
2025-08-29 18:15:40.830382 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-08-29 18:15:40.830391 | orchestrator | Friday 29 August 2025 18:13:34 +0000 (0:00:00.677) 0:02:25.713 *********
2025-08-29 18:15:40.830401 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:15:40.830411 | orchestrator |
2025-08-29 18:15:40.830420 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-08-29 18:15:40.830430 | orchestrator | Friday 29 August 2025 18:13:34 +0000 (0:00:00.550) 0:02:26.264 *********
2025-08-29 18:15:40.830440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.830516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.830526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-08-29 18:15:40.830536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-08-29 18:15:40.830611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.830630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.830640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-08-29 18:15:40.830650 | orchestrator |
2025-08-29 18:15:40.830659 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2025-08-29 18:15:40.830669 | orchestrator | Friday 29 August 2025 18:13:39 +0000 (0:00:05.303) 0:02:31.567 *********
2025-08-29 18:15:40.830683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-08-29 18:15:40.830699 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 18:15:40.830709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.830719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.830737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:15:40.830747 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:15:40.830757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 18:15:40.830767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 18:15:40.830786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.830797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.830807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:15:40.830817 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:15:40.830892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 18:15:40.830906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 18:15:40.830916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.830938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.830948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:15:40.830958 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:15:40.830967 | orchestrator | 2025-08-29 18:15:40.830977 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-08-29 18:15:40.830986 | orchestrator | Friday 29 August 2025 18:13:40 +0000 (0:00:00.694) 
0:02:32.262 ********* 2025-08-29 18:15:40.830996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 18:15:40.831012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 18:15:40.831022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.831045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 18:15:40.831056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.831065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 18:15:40.831075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:15:40.831092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.831102 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:15:40.831112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.831128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:15:40.831138 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:15:40.831152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 18:15:40.831163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 18:15:40.831173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.831189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 18:15:40.831200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 18:15:40.831215 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:15:40.831223 | orchestrator | 2025-08-29 18:15:40.831230 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-08-29 18:15:40.831238 | orchestrator | Friday 29 August 2025 18:13:41 +0000 (0:00:00.866) 0:02:33.128 ********* 2025-08-29 18:15:40.831250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.831258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.831267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 18:15:40.831280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.831302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.831311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.831323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group':
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831410 | orchestrator | 2025-08-29 18:15:40.831418 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-08-29 18:15:40.831425 | orchestrator | Friday 29 August 2025 18:13:46 +0000 
(0:00:05.438) 0:02:38.567 ********* 2025-08-29 18:15:40.831433 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 18:15:40.831442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 18:15:40.831449 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 18:15:40.831457 | orchestrator | 2025-08-29 18:15:40.831465 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-08-29 18:15:40.831473 | orchestrator | Friday 29 August 2025 18:13:48 +0000 (0:00:01.722) 0:02:40.290 ********* 2025-08-29 18:15:40.831491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.831500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.831512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.831521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.831529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.831537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.831554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2025-08-29 18:15:40.831563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': 
'30'}}}) 2025-08-29 18:15:40.831625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.831641 | orchestrator | 2025-08-29 18:15:40.831649 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-08-29 18:15:40.831656 | orchestrator | Friday 29 August 2025 18:14:04 +0000 (0:00:16.357) 0:02:56.647 ********* 2025-08-29 18:15:40.831664 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.831672 | orchestrator | changed: [testbed-node-1] 2025-08-29 18:15:40.831680 | orchestrator | changed: [testbed-node-2] 2025-08-29 18:15:40.831688 | orchestrator | 2025-08-29 18:15:40.831696 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-08-29 18:15:40.831704 | orchestrator | Friday 29 August 2025 
18:14:06 +0000 (0:00:01.535) 0:02:58.182 ********* 2025-08-29 18:15:40.831715 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831723 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831731 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831738 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.831746 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.831754 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.831762 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.831769 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.831777 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.831785 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 18:15:40.831792 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 18:15:40.831800 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 18:15:40.831815 | orchestrator | 2025-08-29 18:15:40.831839 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-08-29 18:15:40.831848 | orchestrator | Friday 29 August 2025 18:14:11 +0000 (0:00:05.315) 0:03:03.497 ********* 2025-08-29 18:15:40.831856 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831863 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831871 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831879 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.831886 | orchestrator | 
changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.831894 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.831902 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.831909 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.831917 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.831924 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 18:15:40.831932 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 18:15:40.831940 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 18:15:40.831947 | orchestrator | 2025-08-29 18:15:40.831955 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-08-29 18:15:40.831963 | orchestrator | Friday 29 August 2025 18:14:17 +0000 (0:00:05.235) 0:03:08.733 ********* 2025-08-29 18:15:40.831970 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831978 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831986 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 18:15:40.831993 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.832001 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.832008 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 18:15:40.832021 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.832029 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.832036 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 18:15:40.832044 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 18:15:40.832052 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 18:15:40.832059 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 18:15:40.832067 | orchestrator | 2025-08-29 18:15:40.832075 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-08-29 18:15:40.832082 | orchestrator | Friday 29 August 2025 18:14:22 +0000 (0:00:05.099) 0:03:13.833 ********* 2025-08-29 18:15:40.832090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.832103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.832117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 18:15:40.832125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.832138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.832146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 18:15:40.832154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 18:15:40.832250 | orchestrator | 2025-08-29 18:15:40.832257 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 18:15:40.832265 | orchestrator | Friday 29 August 2025 18:14:25 +0000 (0:00:03.585) 0:03:17.419 ********* 2025-08-29 18:15:40.832273 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:15:40.832281 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:15:40.832289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:15:40.832296 | orchestrator | 2025-08-29 18:15:40.832304 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-08-29 18:15:40.832312 | orchestrator | Friday 29 August 2025 18:14:26 +0000 (0:00:00.359) 0:03:17.778 ********* 2025-08-29 18:15:40.832319 | orchestrator | changed: [testbed-node-0] 2025-08-29 18:15:40.832327 | orchestrator | 2025-08-29 
18:15:40.832335 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-08-29 18:15:40.832343 | orchestrator | Friday 29 August 2025 18:14:28 +0000 (0:00:01.978) 0:03:19.756 *********
2025-08-29 18:15:40.832350 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832358 | orchestrator |
2025-08-29 18:15:40.832366 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-08-29 18:15:40.832374 | orchestrator | Friday 29 August 2025 18:14:29 +0000 (0:00:01.926) 0:03:21.683 *********
2025-08-29 18:15:40.832381 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832389 | orchestrator |
2025-08-29 18:15:40.832397 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-08-29 18:15:40.832404 | orchestrator | Friday 29 August 2025 18:14:32 +0000 (0:00:02.517) 0:03:24.200 *********
2025-08-29 18:15:40.832412 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832420 | orchestrator |
2025-08-29 18:15:40.832427 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-08-29 18:15:40.832435 | orchestrator | Friday 29 August 2025 18:14:34 +0000 (0:00:02.047) 0:03:26.248 *********
2025-08-29 18:15:40.832443 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832450 | orchestrator |
2025-08-29 18:15:40.832458 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 18:15:40.832466 | orchestrator | Friday 29 August 2025 18:14:53 +0000 (0:00:19.453) 0:03:45.702 *********
2025-08-29 18:15:40.832473 | orchestrator |
2025-08-29 18:15:40.832481 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 18:15:40.832489 | orchestrator | Friday 29 August 2025 18:14:54 +0000 (0:00:00.083) 0:03:45.785 *********
2025-08-29 18:15:40.832496 | orchestrator |
2025-08-29 18:15:40.832504 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 18:15:40.832516 | orchestrator | Friday 29 August 2025 18:14:54 +0000 (0:00:00.074) 0:03:45.860 *********
2025-08-29 18:15:40.832524 | orchestrator |
2025-08-29 18:15:40.832532 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-08-29 18:15:40.832544 | orchestrator | Friday 29 August 2025 18:14:54 +0000 (0:00:00.075) 0:03:45.936 *********
2025-08-29 18:15:40.832552 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832560 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.832568 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.832575 | orchestrator |
2025-08-29 18:15:40.832583 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-08-29 18:15:40.832591 | orchestrator | Friday 29 August 2025 18:15:06 +0000 (0:00:12.508) 0:03:58.444 *********
2025-08-29 18:15:40.832599 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832606 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.832614 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.832622 | orchestrator |
2025-08-29 18:15:40.832630 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-08-29 18:15:40.832638 | orchestrator | Friday 29 August 2025 18:15:13 +0000 (0:00:06.636) 0:04:05.081 *********
2025-08-29 18:15:40.832645 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832653 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.832661 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.832668 | orchestrator |
2025-08-29 18:15:40.832676 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-08-29 18:15:40.832684 | orchestrator | Friday 29 August 2025 18:15:24 +0000 (0:00:10.802) 0:04:15.884 *********
2025-08-29 18:15:40.832692 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832699 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.832707 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.832715 | orchestrator |
2025-08-29 18:15:40.832722 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-08-29 18:15:40.832730 | orchestrator | Friday 29 August 2025 18:15:29 +0000 (0:00:05.172) 0:04:21.057 *********
2025-08-29 18:15:40.832738 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:15:40.832746 | orchestrator | changed: [testbed-node-1]
2025-08-29 18:15:40.832753 | orchestrator | changed: [testbed-node-2]
2025-08-29 18:15:40.832761 | orchestrator |
2025-08-29 18:15:40.832769 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:15:40.832777 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 18:15:40.832789 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 18:15:40.832797 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 18:15:40.832805 | orchestrator |
2025-08-29 18:15:40.832812 | orchestrator |
2025-08-29 18:15:40.832820 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:15:40.832846 | orchestrator | Friday 29 August 2025 18:15:39 +0000 (0:00:10.629) 0:04:31.687 *********
2025-08-29 18:15:40.832854 | orchestrator | ===============================================================================
2025-08-29 18:15:40.832862 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.45s
2025-08-29 18:15:40.832869 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.36s
2025-08-29 18:15:40.832877 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.44s
2025-08-29 18:15:40.832884 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.03s
2025-08-29 18:15:40.832892 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.51s
2025-08-29 18:15:40.832900 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.80s
2025-08-29 18:15:40.832907 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.63s
2025-08-29 18:15:40.832915 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.39s
2025-08-29 18:15:40.832927 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.84s
2025-08-29 18:15:40.832935 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.08s
2025-08-29 18:15:40.832943 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.65s
2025-08-29 18:15:40.832950 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.64s
2025-08-29 18:15:40.832958 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.21s
2025-08-29 18:15:40.832965 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.44s
2025-08-29 18:15:40.832973 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.41s
2025-08-29 18:15:40.832981 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.32s
2025-08-29 18:15:40.832988 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.30s
2025-08-29 18:15:40.832996 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.24s
2025-08-29 18:15:40.833003 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.17s
2025-08-29 18:15:40.833011 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.10s
2025-08-29 18:15:43.867632 | orchestrator | 2025-08-29 18:15:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:15:46.907589 | orchestrator | 2025-08-29 18:15:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:15:49.945207 | orchestrator | 2025-08-29 18:15:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:15:52.992757 | orchestrator | 2025-08-29 18:15:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:15:56.043992 | orchestrator | 2025-08-29 18:15:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:15:59.090649 | orchestrator | 2025-08-29 18:15:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:02.127041 | orchestrator | 2025-08-29 18:16:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:05.170377 | orchestrator | 2025-08-29 18:16:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:08.214505 | orchestrator | 2025-08-29 18:16:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:11.257533 | orchestrator | 2025-08-29 18:16:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:14.294977 | orchestrator | 2025-08-29 18:16:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:17.343172 | orchestrator | 2025-08-29 18:16:17 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:20.384720 | orchestrator | 2025-08-29 18:16:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:23.426319 | orchestrator | 2025-08-29 18:16:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:26.477143 | orchestrator | 2025-08-29 18:16:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:29.517062 | orchestrator | 2025-08-29 18:16:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:32.560298 | orchestrator | 2025-08-29 18:16:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:35.603297 | orchestrator | 2025-08-29 18:16:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:38.649503 | orchestrator | 2025-08-29 18:16:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 18:16:41.693105 | orchestrator |
2025-08-29 18:16:41.964409 | orchestrator |
2025-08-29 18:16:41.972385 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 18:16:41 UTC 2025
2025-08-29 18:16:41.972416 | orchestrator |
2025-08-29 18:16:42.305115 | orchestrator | ok: Runtime: 0:35:51.505264
2025-08-29 18:16:42.550017 |
2025-08-29 18:16:42.550197 | TASK [Bootstrap services]
2025-08-29 18:16:43.326153 | orchestrator |
2025-08-29 18:16:43.326311 | orchestrator | # BOOTSTRAP
2025-08-29 18:16:43.326323 | orchestrator |
2025-08-29 18:16:43.326332 | orchestrator | + set -e
2025-08-29 18:16:43.326340 | orchestrator | + echo
2025-08-29 18:16:43.326349 | orchestrator | + echo '# BOOTSTRAP'
2025-08-29 18:16:43.326360 | orchestrator | + echo
2025-08-29 18:16:43.326389 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-08-29 18:16:43.334050 | orchestrator | + set -e
2025-08-29 18:16:43.334066 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-08-29 18:16:47.908835 | orchestrator | 2025-08-29 18:16:47 | INFO  | It takes a moment until task f737b2b0-c855-4706-9a3a-81b80d231867 (flavor-manager) has been started and output is visible here.
2025-08-29 18:16:55.673111 | orchestrator | 2025-08-29 18:16:51 | INFO  | Flavor SCS-1V-4 created
2025-08-29 18:16:55.673322 | orchestrator | 2025-08-29 18:16:51 | INFO  | Flavor SCS-2V-8 created
2025-08-29 18:16:55.673342 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-4V-16 created
2025-08-29 18:16:55.673355 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-8V-32 created
2025-08-29 18:16:55.673366 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-1V-2 created
2025-08-29 18:16:55.673377 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-2V-4 created
2025-08-29 18:16:55.673389 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-4V-8 created
2025-08-29 18:16:55.673401 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-8V-16 created
2025-08-29 18:16:55.673428 | orchestrator | 2025-08-29 18:16:52 | INFO  | Flavor SCS-16V-32 created
2025-08-29 18:16:55.673440 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-1V-8 created
2025-08-29 18:16:55.673451 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-2V-16 created
2025-08-29 18:16:55.673462 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-4V-32 created
2025-08-29 18:16:55.673473 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-1L-1 created
2025-08-29 18:16:55.673484 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-2V-4-20s created
2025-08-29 18:16:55.673494 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-4V-16-100s created
2025-08-29 18:16:55.673505 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-1V-4-10 created
2025-08-29 18:16:55.673516 | orchestrator | 2025-08-29 18:16:53 | INFO  | Flavor SCS-2V-8-20 created
2025-08-29 18:16:55.673527 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-4V-16-50 created
2025-08-29 18:16:55.673538 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-8V-32-100 created
2025-08-29 18:16:55.673548 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-1V-2-5 created
2025-08-29 18:16:55.673559 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-2V-4-10 created
2025-08-29 18:16:55.673570 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-4V-8-20 created
2025-08-29 18:16:55.673581 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-8V-16-50 created
2025-08-29 18:16:55.673592 | orchestrator | 2025-08-29 18:16:54 | INFO  | Flavor SCS-16V-32-100 created
2025-08-29 18:16:55.673603 | orchestrator | 2025-08-29 18:16:55 | INFO  | Flavor SCS-1V-8-20 created
2025-08-29 18:16:55.673614 | orchestrator | 2025-08-29 18:16:55 | INFO  | Flavor SCS-2V-16-50 created
2025-08-29 18:16:55.673625 | orchestrator | 2025-08-29 18:16:55 | INFO  | Flavor SCS-4V-32-100 created
2025-08-29 18:16:55.673636 | orchestrator | 2025-08-29 18:16:55 | INFO  | Flavor SCS-1L-1-5 created
2025-08-29 18:16:57.861761 | orchestrator | 2025-08-29 18:16:57 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-08-29 18:17:08.043122 | orchestrator | 2025-08-29 18:17:08 | INFO  | Task 2898f90d-ad2f-4089-a376-60c5bfda98ee (bootstrap-basic) was prepared for execution.
2025-08-29 18:17:08.043310 | orchestrator | 2025-08-29 18:17:08 | INFO  | It takes a moment until task 2898f90d-ad2f-4089-a376-60c5bfda98ee (bootstrap-basic) has been started and output is visible here.
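The flavor names created above follow the SCS naming scheme (`SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, with a trailing `s` marking SSD-backed disks). A minimal parser sketch under that assumption; `parse_scs_flavor` is a hypothetical helper, not part of the flavor-manager:

```python
import re

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name like 'SCS-2V-4-20s' into its components.

    Assumes the SCS convention SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]];
    the trailing 's' marks an SSD-backed disk.
    """
    m = re.fullmatch(r"SCS-(\d+)([A-Z])-(\d+)(?:-(\d+)(s?))?", name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    cpus, cpu_class, ram, disk, ssd = m.groups()
    return {
        "vcpus": int(cpus),
        "cpu_class": cpu_class,  # CPU class letter from the name (V, L, ...)
        "ram_gib": int(ram),
        "disk_gb": int(disk) if disk else None,
        "ssd": ssd == "s",
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
print(parse_scs_flavor("SCS-1L-1"))
```

For example, `SCS-2V-4-20s` from the log parses to 2 vCPUs, 4 GiB RAM, and a 20 GB SSD disk, while `SCS-1L-1` has no disk component at all.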
2025-08-29 18:18:08.254240 | orchestrator |
2025-08-29 18:18:08.254359 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-08-29 18:18:08.254378 | orchestrator |
2025-08-29 18:18:08.254391 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 18:18:08.254403 | orchestrator | Friday 29 August 2025 18:17:12 +0000 (0:00:00.088) 0:00:00.088 *********
2025-08-29 18:18:08.254414 | orchestrator | ok: [localhost]
2025-08-29 18:18:08.254427 | orchestrator |
2025-08-29 18:18:08.254438 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-08-29 18:18:08.254451 | orchestrator | Friday 29 August 2025 18:17:14 +0000 (0:00:01.892) 0:00:01.981 *********
2025-08-29 18:18:08.254462 | orchestrator | ok: [localhost]
2025-08-29 18:18:08.254473 | orchestrator |
2025-08-29 18:18:08.254484 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-08-29 18:18:08.254495 | orchestrator | Friday 29 August 2025 18:17:22 +0000 (0:00:08.428) 0:00:10.410 *********
2025-08-29 18:18:08.254506 | orchestrator | changed: [localhost]
2025-08-29 18:18:08.254517 | orchestrator |
2025-08-29 18:18:08.254528 | orchestrator | TASK [Get volume type local] ***************************************************
2025-08-29 18:18:08.254538 | orchestrator | Friday 29 August 2025 18:17:30 +0000 (0:00:07.806) 0:00:18.217 *********
2025-08-29 18:18:08.254550 | orchestrator | ok: [localhost]
2025-08-29 18:18:08.254561 | orchestrator |
2025-08-29 18:18:08.254572 | orchestrator | TASK [Create volume type local] ************************************************
2025-08-29 18:18:08.254583 | orchestrator | Friday 29 August 2025 18:17:37 +0000 (0:00:07.572) 0:00:25.789 *********
2025-08-29 18:18:08.254594 | orchestrator | changed: [localhost]
2025-08-29 18:18:08.254609 | orchestrator |
2025-08-29 18:18:08.254620 | orchestrator | TASK [Create public network] ***************************************************
2025-08-29 18:18:08.254631 | orchestrator | Friday 29 August 2025 18:17:44 +0000 (0:00:06.852) 0:00:32.641 *********
2025-08-29 18:18:08.254642 | orchestrator | changed: [localhost]
2025-08-29 18:18:08.254653 | orchestrator |
2025-08-29 18:18:08.254664 | orchestrator | TASK [Set public network to default] *******************************************
2025-08-29 18:18:08.254674 | orchestrator | Friday 29 August 2025 18:17:49 +0000 (0:00:05.006) 0:00:37.647 *********
2025-08-29 18:18:08.254685 | orchestrator | changed: [localhost]
2025-08-29 18:18:08.254696 | orchestrator |
2025-08-29 18:18:08.254716 | orchestrator | TASK [Create public subnet] ****************************************************
2025-08-29 18:18:08.254728 | orchestrator | Friday 29 August 2025 18:17:56 +0000 (0:00:06.246) 0:00:43.894 *********
2025-08-29 18:18:08.254738 | orchestrator | changed: [localhost]
2025-08-29 18:18:08.254749 | orchestrator |
2025-08-29 18:18:08.254760 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-08-29 18:18:08.254771 | orchestrator | Friday 29 August 2025 18:18:00 +0000 (0:00:04.594) 0:00:48.488 *********
2025-08-29 18:18:08.254781 | orchestrator | changed: [localhost]
2025-08-29 18:18:08.254792 | orchestrator |
2025-08-29 18:18:08.254803 | orchestrator | TASK [Create manager role] *****************************************************
2025-08-29 18:18:08.254814 | orchestrator | Friday 29 August 2025 18:18:04 +0000 (0:00:03.784) 0:00:52.273 *********
2025-08-29 18:18:08.254825 | orchestrator | ok: [localhost]
2025-08-29 18:18:08.254836 | orchestrator |
2025-08-29 18:18:08.254847 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:18:08.254858 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 18:18:08.254870 | orchestrator |
2025-08-29 18:18:08.254881 | orchestrator |
2025-08-29 18:18:08.254892 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:18:08.254903 | orchestrator | Friday 29 August 2025 18:18:08 +0000 (0:00:03.573) 0:00:55.846 *********
2025-08-29 18:18:08.254939 | orchestrator | ===============================================================================
2025-08-29 18:18:08.254951 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.43s
2025-08-29 18:18:08.254961 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.81s
2025-08-29 18:18:08.254972 | orchestrator | Get volume type local --------------------------------------------------- 7.57s
2025-08-29 18:18:08.254983 | orchestrator | Create volume type local ------------------------------------------------ 6.85s
2025-08-29 18:18:08.255019 | orchestrator | Set public network to default ------------------------------------------- 6.25s
2025-08-29 18:18:08.255031 | orchestrator | Create public network --------------------------------------------------- 5.01s
2025-08-29 18:18:08.255041 | orchestrator | Create public subnet ---------------------------------------------------- 4.59s
2025-08-29 18:18:08.255052 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.78s
2025-08-29 18:18:08.255063 | orchestrator | Create manager role ----------------------------------------------------- 3.57s
2025-08-29 18:18:08.255074 | orchestrator | Gathering Facts --------------------------------------------------------- 1.89s
2025-08-29 18:18:10.511480 | orchestrator | 2025-08-29 18:18:10 | INFO  | It takes a moment until task 5143a430-9576-4737-98fa-b5d09b83a591 (image-manager) has been started and output is visible here.
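Each play above ends with a PLAY RECAP line per host. A minimal sketch (hypothetical helper, not part of the testbed scripts) of turning such a line into counters, e.g. to fail a CI gate when `failed` or `unreachable` is non-zero:

```python
import re

def parse_recap_line(line: str):
    """Parse an Ansible PLAY RECAP host line into (host, counter dict).

    Works on lines such as:
    'localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0'
    """
    host, _, rest = line.partition(":")
    counters = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, counters = parse_recap_line(
    "localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
)
# A healthy run has no failed and no unreachable hosts.
print(host, counters["failed"] == 0 and counters["unreachable"] == 0)
```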
2025-08-29 18:18:51.407093 | orchestrator | 2025-08-29 18:18:14 | INFO  | Processing image 'Cirros 0.6.2'
2025-08-29 18:18:51.407217 | orchestrator | 2025-08-29 18:18:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-08-29 18:18:51.407238 | orchestrator | 2025-08-29 18:18:14 | INFO  | Importing image Cirros 0.6.2
2025-08-29 18:18:51.407251 | orchestrator | 2025-08-29 18:18:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-08-29 18:18:51.407263 | orchestrator | 2025-08-29 18:18:15 | INFO  | Waiting for image to leave queued state...
2025-08-29 18:18:51.407276 | orchestrator | 2025-08-29 18:18:17 | INFO  | Waiting for import to complete...
2025-08-29 18:18:51.407287 | orchestrator | 2025-08-29 18:18:28 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-08-29 18:18:51.407297 | orchestrator | 2025-08-29 18:18:28 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-08-29 18:18:51.407308 | orchestrator | 2025-08-29 18:18:28 | INFO  | Setting internal_version = 0.6.2
2025-08-29 18:18:51.407319 | orchestrator | 2025-08-29 18:18:28 | INFO  | Setting image_original_user = cirros
2025-08-29 18:18:51.407330 | orchestrator | 2025-08-29 18:18:28 | INFO  | Adding tag os:cirros
2025-08-29 18:18:51.407341 | orchestrator | 2025-08-29 18:18:28 | INFO  | Setting property architecture: x86_64
2025-08-29 18:18:51.407352 | orchestrator | 2025-08-29 18:18:29 | INFO  | Setting property hw_disk_bus: scsi
2025-08-29 18:18:51.407362 | orchestrator | 2025-08-29 18:18:29 | INFO  | Setting property hw_rng_model: virtio
2025-08-29 18:18:51.407373 | orchestrator | 2025-08-29 18:18:29 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-08-29 18:18:51.407384 | orchestrator | 2025-08-29 18:18:29 | INFO  | Setting property hw_watchdog_action: reset
2025-08-29 18:18:51.407394 | orchestrator | 2025-08-29 18:18:29 | INFO  | Setting property hypervisor_type: qemu
2025-08-29 18:18:51.407405 | orchestrator | 2025-08-29 18:18:30 | INFO  | Setting property os_distro: cirros
2025-08-29 18:18:51.407415 | orchestrator | 2025-08-29 18:18:30 | INFO  | Setting property replace_frequency: never
2025-08-29 18:18:51.407426 | orchestrator | 2025-08-29 18:18:30 | INFO  | Setting property uuid_validity: none
2025-08-29 18:18:51.407437 | orchestrator | 2025-08-29 18:18:30 | INFO  | Setting property provided_until: none
2025-08-29 18:18:51.407473 | orchestrator | 2025-08-29 18:18:30 | INFO  | Setting property image_description: Cirros
2025-08-29 18:18:51.407494 | orchestrator | 2025-08-29 18:18:31 | INFO  | Setting property image_name: Cirros
2025-08-29 18:18:51.407505 | orchestrator | 2025-08-29 18:18:31 | INFO  | Setting property internal_version: 0.6.2
2025-08-29 18:18:51.407521 | orchestrator | 2025-08-29 18:18:31 | INFO  | Setting property image_original_user: cirros
2025-08-29 18:18:51.407532 | orchestrator | 2025-08-29 18:18:31 | INFO  | Setting property os_version: 0.6.2
2025-08-29 18:18:51.407543 | orchestrator | 2025-08-29 18:18:31 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-08-29 18:18:51.407556 | orchestrator | 2025-08-29 18:18:32 | INFO  | Setting property image_build_date: 2023-05-30
2025-08-29 18:18:51.407568 | orchestrator | 2025-08-29 18:18:32 | INFO  | Checking status of 'Cirros 0.6.2'
2025-08-29 18:18:51.407580 | orchestrator | 2025-08-29 18:18:32 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-08-29 18:18:51.407592 | orchestrator | 2025-08-29 18:18:32 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-08-29 18:18:51.407604 | orchestrator | 2025-08-29 18:18:32 | INFO  | Processing image 'Cirros 0.6.3'
2025-08-29 18:18:51.407616 | orchestrator | 2025-08-29 18:18:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-08-29 18:18:51.407629 | orchestrator | 2025-08-29 18:18:32 | INFO  | Importing image Cirros 0.6.3
2025-08-29 18:18:51.407640 | orchestrator | 2025-08-29 18:18:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-08-29 18:18:51.407653 | orchestrator | 2025-08-29 18:18:33 | INFO  | Waiting for image to leave queued state...
2025-08-29 18:18:51.407664 | orchestrator | 2025-08-29 18:18:36 | INFO  | Waiting for import to complete...
2025-08-29 18:18:51.407676 | orchestrator | 2025-08-29 18:18:46 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-08-29 18:18:51.407706 | orchestrator | 2025-08-29 18:18:46 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-08-29 18:18:51.407719 | orchestrator | 2025-08-29 18:18:46 | INFO  | Setting internal_version = 0.6.3
2025-08-29 18:18:51.407731 | orchestrator | 2025-08-29 18:18:46 | INFO  | Setting image_original_user = cirros
2025-08-29 18:18:51.407743 | orchestrator | 2025-08-29 18:18:46 | INFO  | Adding tag os:cirros
2025-08-29 18:18:51.407756 | orchestrator | 2025-08-29 18:18:46 | INFO  | Setting property architecture: x86_64
2025-08-29 18:18:51.407767 | orchestrator | 2025-08-29 18:18:47 | INFO  | Setting property hw_disk_bus: scsi
2025-08-29 18:18:51.407778 | orchestrator | 2025-08-29 18:18:47 | INFO  | Setting property hw_rng_model: virtio
2025-08-29 18:18:51.407788 | orchestrator | 2025-08-29 18:18:47 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-08-29 18:18:51.407799 | orchestrator | 2025-08-29 18:18:47 | INFO  | Setting property hw_watchdog_action: reset
2025-08-29 18:18:51.407810 | orchestrator | 2025-08-29 18:18:48 | INFO  | Setting property hypervisor_type: qemu
2025-08-29 18:18:51.407821 | orchestrator | 2025-08-29 18:18:48 | INFO  | Setting property os_distro: cirros
2025-08-29 18:18:51.407832 | orchestrator | 2025-08-29 18:18:48 | INFO  | Setting property replace_frequency: never
2025-08-29 18:18:51.407861 | orchestrator | 2025-08-29 18:18:48 | INFO  | Setting property uuid_validity: none
2025-08-29 18:18:51.407872 | orchestrator | 2025-08-29 18:18:48 | INFO  | Setting property provided_until: none
2025-08-29 18:18:51.407883 | orchestrator | 2025-08-29 18:18:49 | INFO  | Setting property image_description: Cirros
2025-08-29 18:18:51.407894 | orchestrator | 2025-08-29 18:18:49 | INFO  | Setting property image_name: Cirros
2025-08-29 18:18:51.407905 | orchestrator | 2025-08-29 18:18:49 | INFO  | Setting property internal_version: 0.6.3
2025-08-29 18:18:51.407915 | orchestrator | 2025-08-29 18:18:49 | INFO  | Setting property image_original_user: cirros
2025-08-29 18:18:51.407926 | orchestrator | 2025-08-29 18:18:50 | INFO  | Setting property os_version: 0.6.3
2025-08-29 18:18:51.407937 | orchestrator | 2025-08-29 18:18:50 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-08-29 18:18:51.407948 | orchestrator | 2025-08-29 18:18:50 | INFO  | Setting property image_build_date: 2024-09-26
2025-08-29 18:18:51.407958 | orchestrator | 2025-08-29 18:18:50 | INFO  | Checking status of 'Cirros 0.6.3'
2025-08-29 18:18:51.407975 | orchestrator | 2025-08-29 18:18:50 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-08-29 18:18:51.708015 | orchestrator | 2025-08-29 18:18:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-08-29 18:18:51.708015 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-08-29 18:18:53.768714 | orchestrator | 2025-08-29 18:18:53 | INFO  | date: 2025-08-29
2025-08-29 18:18:53.768819 | orchestrator | 2025-08-29 18:18:53 | INFO  | image: octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 18:18:53.768999 | orchestrator | 2025-08-29 18:18:53 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 18:18:53.769076 | orchestrator | 2025-08-29 18:18:53 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2.CHECKSUM
2025-08-29 18:18:53.797429 | orchestrator | 2025-08-29 18:18:53 | INFO  | checksum: 9bd11944634778935b43eb626302bc74d657e4c319fdb6fd625fdfeb36ffc69d
2025-08-29 18:18:53.898199 | orchestrator | 2025-08-29 18:18:53 | INFO  | It takes a moment until task 9d099e46-c776-4bee-8687-d99d590efb49 (image-manager) has been started and output is visible here.
2025-08-29 18:19:54.281931 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
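The amphora-image step above downloads a published `.CHECKSUM` and logs the expected SHA-256 before importing the qcow2. A minimal sketch of that digest comparison; `sha256_matches` is a hypothetical helper, not the bootstrap script's actual code:

```python
import hashlib

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Return True when the SHA-256 digest of data equals expected_hex."""
    return hashlib.sha256(data).hexdigest() == expected_hex.strip().lower()

# In the bootstrap script the bytes would come from the downloaded qcow2 and
# expected_hex from the .CHECKSUM file; dummy bytes stand in here.
payload = b"fake image bytes"
good = hashlib.sha256(payload).hexdigest()
print(sha256_matches(payload, good))      # matching digest
print(sha256_matches(payload, "0" * 64))  # mismatching digest
```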
2025-08-29 18:19:54.282167 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-08-29 18:19:54.282193 | orchestrator | 2025-08-29 18:18:55 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 18:19:54.282212 | orchestrator | 2025-08-29 18:18:55 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2: 200
2025-08-29 18:19:54.282226 | orchestrator | 2025-08-29 18:18:55 | INFO  | Importing image OpenStack Octavia Amphora 2025-08-29
2025-08-29 18:19:54.282238 | orchestrator | 2025-08-29 18:18:55 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 18:19:54.282251 | orchestrator | 2025-08-29 18:18:57 | INFO  | Waiting for image to leave queued state...
2025-08-29 18:19:54.282290 | orchestrator | 2025-08-29 18:18:59 | INFO  | Waiting for import to complete...
2025-08-29 18:19:54.282303 | orchestrator | 2025-08-29 18:19:09 | INFO  | Waiting for import to complete...
2025-08-29 18:19:54.282314 | orchestrator | 2025-08-29 18:19:19 | INFO  | Waiting for import to complete...
2025-08-29 18:19:54.282325 | orchestrator | 2025-08-29 18:19:29 | INFO  | Waiting for import to complete...
2025-08-29 18:19:54.282336 | orchestrator | 2025-08-29 18:19:39 | INFO  | Waiting for import to complete...
2025-08-29 18:19:54.282347 | orchestrator | 2025-08-29 18:19:49 | INFO  | Import of 'OpenStack Octavia Amphora 2025-08-29' successfully completed, reloading images
2025-08-29 18:19:54.282359 | orchestrator | 2025-08-29 18:19:50 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 18:19:54.282370 | orchestrator | 2025-08-29 18:19:50 | INFO  | Setting internal_version = 2025-08-29
2025-08-29 18:19:54.282381 | orchestrator | 2025-08-29 18:19:50 | INFO  | Setting image_original_user = ubuntu
2025-08-29 18:19:54.282392 | orchestrator | 2025-08-29 18:19:50 | INFO  | Adding tag amphora
2025-08-29 18:19:54.282403 | orchestrator | 2025-08-29 18:19:50 | INFO  | Adding tag os:ubuntu
2025-08-29 18:19:54.282414 | orchestrator | 2025-08-29 18:19:50 | INFO  | Setting property architecture: x86_64
2025-08-29 18:19:54.282425 | orchestrator | 2025-08-29 18:19:50 | INFO  | Setting property hw_disk_bus: scsi
2025-08-29 18:19:54.282436 | orchestrator | 2025-08-29 18:19:50 | INFO  | Setting property hw_rng_model: virtio
2025-08-29 18:19:54.282457 | orchestrator | 2025-08-29 18:19:51 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-08-29 18:19:54.282471 | orchestrator | 2025-08-29 18:19:51 | INFO  | Setting property hw_watchdog_action: reset
2025-08-29 18:19:54.282483 | orchestrator | 2025-08-29 18:19:51 | INFO  | Setting property hypervisor_type: qemu
2025-08-29 18:19:54.282496 | orchestrator | 2025-08-29 18:19:51 | INFO  | Setting property os_distro: ubuntu
2025-08-29 18:19:54.282508 | orchestrator | 2025-08-29 18:19:51 | INFO  | Setting property replace_frequency: quarterly
2025-08-29 18:19:54.282520 | orchestrator | 2025-08-29 18:19:52 | INFO  | Setting property uuid_validity: last-1
2025-08-29 18:19:54.282531 | orchestrator | 2025-08-29 18:19:52 | INFO  | Setting property provided_until: none
2025-08-29 18:19:54.282544 | orchestrator | 2025-08-29 18:19:52 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-08-29 18:19:54.282556 | orchestrator | 2025-08-29 18:19:52 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-08-29 18:19:54.282569 | orchestrator | 2025-08-29 18:19:52 | INFO  | Setting property internal_version: 2025-08-29
2025-08-29 18:19:54.282580 | orchestrator | 2025-08-29 18:19:53 | INFO  | Setting property image_original_user: ubuntu
2025-08-29 18:19:54.282592 | orchestrator | 2025-08-29 18:19:53 | INFO  | Setting property os_version: 2025-08-29
2025-08-29 18:19:54.282605 | orchestrator | 2025-08-29 18:19:53 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250829.qcow2
2025-08-29 18:19:54.282636 | orchestrator | 2025-08-29 18:19:53 | INFO  | Setting property image_build_date: 2025-08-29
2025-08-29 18:19:54.282648 | orchestrator | 2025-08-29 18:19:53 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 18:19:54.282661 | orchestrator | 2025-08-29 18:19:53 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-08-29'
2025-08-29 18:19:54.282681 | orchestrator | 2025-08-29 18:19:54 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-08-29 18:19:54.282693 | orchestrator | 2025-08-29 18:19:54 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-08-29 18:19:54.282706 | orchestrator | 2025-08-29 18:19:54 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-08-29 18:19:54.282718 | orchestrator | 2025-08-29 18:19:54 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-08-29 18:19:54.715401 | orchestrator | ok: Runtime: 0:03:11.618819
2025-08-29 18:19:54.729254 |
2025-08-29 18:19:54.729359 | TASK [Run checks]
2025-08-29 18:19:55.406610 | orchestrator | + set -e
2025-08-29 18:19:55.406722 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 18:19:55.406733 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 18:19:55.406741 | orchestrator | ++ INTERACTIVE=false
2025-08-29 18:19:55.406754 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 18:19:55.406759 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 18:19:55.407060 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-08-29 18:19:55.408009 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-08-29 18:19:55.411839 | orchestrator |
2025-08-29 18:19:55.411853 | orchestrator | # CHECK
2025-08-29 18:19:55.411858 | orchestrator |
2025-08-29 18:19:55.411863 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 18:19:55.411870 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 18:19:55.411874 | orchestrator | + echo
2025-08-29 18:19:55.411878 | orchestrator | + echo '# CHECK'
2025-08-29 18:19:55.411882 | orchestrator | + echo
2025-08-29 18:19:55.411888 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-08-29 18:19:55.412757 | orchestrator | ++ semver 9.2.0 5.0.0
2025-08-29 18:19:55.469627 | orchestrator |
2025-08-29 18:19:55.469650 | orchestrator | ## Containers @ testbed-manager
2025-08-29 18:19:55.469659 | orchestrator |
2025-08-29 18:19:55.469667 | orchestrator | + [[ 1 -eq -1 ]]
2025-08-29 18:19:55.469675 | orchestrator | + echo
2025-08-29 18:19:55.469682 | orchestrator | + echo '## Containers @ testbed-manager'
2025-08-29 18:19:55.469690 | orchestrator | + echo
2025-08-29 18:19:55.469697 | orchestrator | + osism container testbed-manager ps
2025-08-29 18:19:57.765620 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-08-29 18:19:57.765824 | orchestrator | 01a2ded93a6b registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2025-08-29 18:19:57.765850 | orchestrator | 06832369c14f registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-08-29 18:19:57.765862 | orchestrator | f4b9f65b349c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-08-29 18:19:57.765880 | orchestrator | 29db66cf27aa registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-08-29 18:19:57.765892 | orchestrator | 29c89b32f121 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2025-08-29 18:19:57.765903 | orchestrator | 26817632d9ee registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient
2025-08-29 18:19:57.765918 | orchestrator | 45ec0ab7579a registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-08-29 18:19:57.765930 | orchestrator | 77e787b2475e registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-08-29 18:19:57.765967 | orchestrator | af099ac1f307 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-08-29 18:19:57.765979 | orchestrator | f6220f80dd1d phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-08-29 18:19:57.765990 | orchestrator | 9491a58201ce registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient
2025-08-29 18:19:57.766001 | orchestrator | 2915b78cb821 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-08-29 18:19:57.766012 | orchestrator
| 57565374e941 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-08-29 18:19:57.766059 | orchestrator | dd89ac004b3e registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" About an hour ago Up 39 minutes (healthy) manager-inventory_reconciler-1 2025-08-29 18:19:57.766093 | orchestrator | b980dce64d80 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible 2025-08-29 18:19:57.766105 | orchestrator | 334544a0d26e registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes 2025-08-29 18:19:57.766140 | orchestrator | 829f3430d236 registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible 2025-08-29 18:19:57.766151 | orchestrator | 21b14d98cc3a registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible 2025-08-29 18:19:57.766162 | orchestrator | 0e2f9b0599c3 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1 2025-08-29 18:19:57.766174 | orchestrator | f790f5424cd9 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-08-29 18:19:57.766185 | orchestrator | 037c98e54e3c registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient 2025-08-29 18:19:57.766196 | orchestrator | 155956fd8959 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1 2025-08-29 18:19:57.766216 | orchestrator | af745c4ab6ee 
registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1 2025-08-29 18:19:57.766228 | orchestrator | 9ec23e03a47f registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1 2025-08-29 18:19:57.766239 | orchestrator | 491c93de9962 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1 2025-08-29 18:19:57.766250 | orchestrator | 9199f88746cc registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1 2025-08-29 18:19:57.766261 | orchestrator | 356a72b8f46d registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1 2025-08-29 18:19:57.766272 | orchestrator | a9c56202aa1c registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-08-29 18:19:58.036623 | orchestrator | 2025-08-29 18:19:58.036734 | orchestrator | ## Images @ testbed-manager 2025-08-29 18:19:58.036749 | orchestrator | 2025-08-29 18:19:58.036761 | orchestrator | + echo 2025-08-29 18:19:58.036773 | orchestrator | + echo '## Images @ testbed-manager' 2025-08-29 18:19:58.036786 | orchestrator | + echo 2025-08-29 18:19:58.036797 | orchestrator | + osism container testbed-manager images 2025-08-29 18:20:00.283565 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 18:20:00.283704 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 e303c4555969 11 hours ago 237MB 2025-08-29 18:20:00.283724 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 3 weeks ago 11.5MB 2025-08-29 18:20:00.283736 | orchestrator | 
registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 6 weeks ago 571MB 2025-08-29 18:20:00.283747 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 18:20:00.283778 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 2025-08-29 18:20:00.283789 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 18:20:00.283800 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 6 weeks ago 891MB 2025-08-29 18:20:00.283811 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 6 weeks ago 360MB 2025-08-29 18:20:00.283822 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 18:20:00.283832 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 6 weeks ago 456MB 2025-08-29 18:20:00.283843 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 18:20:00.283853 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 6 weeks ago 575MB 2025-08-29 18:20:00.283882 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 6 weeks ago 535MB 2025-08-29 18:20:00.283893 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 6 weeks ago 308MB 2025-08-29 18:20:00.283904 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 6 weeks ago 1.21GB 2025-08-29 18:20:00.283915 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 7 weeks ago 310MB 2025-08-29 18:20:00.283925 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 7 weeks ago 41.4MB 
2025-08-29 18:20:00.283936 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB 2025-08-29 18:20:00.283947 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 2 months ago 329MB 2025-08-29 18:20:00.283957 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 months ago 453MB 2025-08-29 18:20:00.283968 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB 2025-08-29 18:20:00.283978 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 11 months ago 300MB 2025-08-29 18:20:00.283989 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 14 months ago 146MB 2025-08-29 18:20:00.618989 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 18:20:00.619897 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 18:20:00.677438 | orchestrator | 2025-08-29 18:20:00.677484 | orchestrator | ## Containers @ testbed-node-0 2025-08-29 18:20:00.677495 | orchestrator | 2025-08-29 18:20:00.677506 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 18:20:00.677516 | orchestrator | + echo 2025-08-29 18:20:00.677526 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-08-29 18:20:00.677536 | orchestrator | + echo 2025-08-29 18:20:00.677546 | orchestrator | + osism container testbed-node-0 ps 2025-08-29 18:20:02.962204 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 18:20:02.962314 | orchestrator | 8c184e6dc1e2 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-08-29 18:20:02.962330 | orchestrator | 7df93a441029 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-08-29 18:20:02.962342 | orchestrator | d70a7df21012 
registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-08-29 18:20:02.962354 | orchestrator | 01963c9bf176 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-08-29 18:20:02.962364 | orchestrator | 889bb28d7d30 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-08-29 18:20:02.962376 | orchestrator | 893b2e2029c8 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-08-29 18:20:02.962387 | orchestrator | 0f484b892d14 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-08-29 18:20:02.962399 | orchestrator | b5b5d3639d91 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-08-29 18:20:02.962428 | orchestrator | e0fb1ed3e4ee registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-08-29 18:20:02.962440 | orchestrator | 26fac17baa4d registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-08-29 18:20:02.962451 | orchestrator | 4b9325107004 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-08-29 18:20:02.962462 | orchestrator | ae1f5b213bb1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-08-29 18:20:02.962472 | orchestrator | 2a9ba9a5484b 
registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-08-29 18:20:02.962483 | orchestrator | f459f7c772b9 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-08-29 18:20:02.962494 | orchestrator | aa34544cd2cf registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-08-29 18:20:02.962505 | orchestrator | 482970ccea1a registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-08-29 18:20:02.962516 | orchestrator | 3a5685b5f57c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-08-29 18:20:02.962527 | orchestrator | 160f086cc919 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_conductor 2025-08-29 18:20:02.962537 | orchestrator | 927ae5bedeb7 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-08-29 18:20:02.962565 | orchestrator | 03e4421664cd registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-08-29 18:20:02.962577 | orchestrator | cd317f0fba9a registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-08-29 18:20:02.962587 | orchestrator | 2152fc4de720 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-08-29 18:20:02.962598 | 
orchestrator | ddc02a3508ff registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-08-29 18:20:02.962612 | orchestrator | 7824737f25ca registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-08-29 18:20:02.962639 | orchestrator | 6f35be7a7c5e registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-08-29 18:20:02.962651 | orchestrator | b56460181b6c registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-08-29 18:20:02.962678 | orchestrator | bc017e37bf95 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-08-29 18:20:02.962691 | orchestrator | b43539a199e0 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-08-29 18:20:02.962704 | orchestrator | 27de2302907d registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-08-29 18:20:02.962717 | orchestrator | 5996516d9fcb registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-08-29 18:20:02.962733 | orchestrator | 3c9b54e89184 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-08-29 18:20:02.962746 | orchestrator | 6c31cd652dd5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 
2025-08-29 18:20:02.962759 | orchestrator | 160ab885d78f registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-08-29 18:20:02.962772 | orchestrator | b04593fc72ce registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-08-29 18:20:02.962784 | orchestrator | 9a866401424b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-08-29 18:20:02.962797 | orchestrator | 14f771cdd6ae registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-08-29 18:20:02.962809 | orchestrator | 8ec353734443 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-08-29 18:20:02.962826 | orchestrator | ad2bb4c17e60 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-08-29 18:20:02.962839 | orchestrator | 89cb14939f98 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-08-29 18:20:02.962852 | orchestrator | 6675a62b441d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-08-29 18:20:02.962873 | orchestrator | 029a8f5c6596 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-08-29 18:20:02.962887 | orchestrator | ac27f8650a44 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-08-29 18:20:02.962899 | orchestrator | 8fce724ba480 
registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-08-29 18:20:02.962911 | orchestrator | af28f9b88464 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-08-29 18:20:02.962930 | orchestrator | c4eb5caeb7e7 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-08-29 18:20:02.962943 | orchestrator | a9d7ec918a11 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-08-29 18:20:02.962956 | orchestrator | 37d3d31bc1d1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 2025-08-29 18:20:02.962967 | orchestrator | dbbf7fde937c registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-08-29 18:20:02.962978 | orchestrator | e962bb32fb64 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-08-29 18:20:02.962989 | orchestrator | 8b57e90c7309 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-08-29 18:20:02.963000 | orchestrator | bad20190445c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-08-29 18:20:02.963011 | orchestrator | f8748c25765f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-08-29 18:20:02.963022 | orchestrator | 7c71764890df registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago 
Up 29 minutes (healthy) redis 2025-08-29 18:20:02.963032 | orchestrator | c86e901c6c0c registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-08-29 18:20:02.963043 | orchestrator | 80c294246924 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-08-29 18:20:02.963054 | orchestrator | 98b597366c99 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-08-29 18:20:02.963064 | orchestrator | d0ab2a516b09 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-08-29 18:20:03.323587 | orchestrator | 2025-08-29 18:20:03.323662 | orchestrator | ## Images @ testbed-node-0 2025-08-29 18:20:03.323676 | orchestrator | 2025-08-29 18:20:03.323687 | orchestrator | + echo 2025-08-29 18:20:03.323698 | orchestrator | + echo '## Images @ testbed-node-0' 2025-08-29 18:20:03.323711 | orchestrator | + echo 2025-08-29 18:20:03.323722 | orchestrator | + osism container testbed-node-0 images 2025-08-29 18:20:05.583859 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-08-29 18:20:05.583961 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB 2025-08-29 18:20:05.583977 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB 2025-08-29 18:20:05.583988 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB 2025-08-29 18:20:05.583999 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB 2025-08-29 18:20:05.584495 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB 2025-08-29 18:20:05.584536 | orchestrator | 
registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB 2025-08-29 18:20:05.584549 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB 2025-08-29 18:20:05.584561 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB 2025-08-29 18:20:05.584574 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB 2025-08-29 18:20:05.584586 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB 2025-08-29 18:20:05.584611 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB 2025-08-29 18:20:05.584624 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB 2025-08-29 18:20:05.584637 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB 2025-08-29 18:20:05.584650 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB 2025-08-29 18:20:05.584662 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB 2025-08-29 18:20:05.584674 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB 2025-08-29 18:20:05.584687 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB 2025-08-29 18:20:05.584699 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB 2025-08-29 18:20:05.584711 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB 2025-08-29 18:20:05.584723 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB 2025-08-29 18:20:05.584736 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB 2025-08-29 18:20:05.584748 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB 2025-08-29 18:20:05.584761 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB 2025-08-29 18:20:05.584771 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB 2025-08-29 18:20:05.584782 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB 2025-08-29 18:20:05.584793 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB 2025-08-29 18:20:05.584803 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 6 weeks ago 1.04GB 2025-08-29 18:20:05.584814 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 6 weeks ago 1.04GB 2025-08-29 18:20:05.584825 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB 2025-08-29 18:20:05.584835 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB 2025-08-29 18:20:05.584846 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB 2025-08-29 18:20:05.584883 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB 2025-08-29 18:20:05.584895 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB 2025-08-29 18:20:05.584906 | orchestrator 
| registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB 2025-08-29 18:20:05.584916 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB 2025-08-29 18:20:05.584933 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB 2025-08-29 18:20:05.584944 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB 2025-08-29 18:20:05.584955 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB 2025-08-29 18:20:05.584965 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB 2025-08-29 18:20:05.584976 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB 2025-08-29 18:20:05.584986 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB 2025-08-29 18:20:05.584997 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB 2025-08-29 18:20:05.585007 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB 2025-08-29 18:20:05.585018 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB 2025-08-29 18:20:05.585028 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB 2025-08-29 18:20:05.585039 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB 2025-08-29 18:20:05.585049 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB 2025-08-29 18:20:05.585060 | orchestrator | 
registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB 2025-08-29 18:20:05.585070 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB 2025-08-29 18:20:05.585081 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB 2025-08-29 18:20:05.585091 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB 2025-08-29 18:20:05.585102 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB 2025-08-29 18:20:05.585112 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 6 weeks ago 1.11GB 2025-08-29 18:20:05.585151 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 6 weeks ago 1.11GB 2025-08-29 18:20:05.585163 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB 2025-08-29 18:20:05.585173 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB 2025-08-29 18:20:05.585183 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB 2025-08-29 18:20:05.585194 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB 2025-08-29 18:20:05.585212 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 6 weeks ago 1.04GB 2025-08-29 18:20:05.585222 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 6 weeks ago 1.04GB 2025-08-29 18:20:05.585233 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 6 weeks ago 1.04GB 2025-08-29 18:20:05.585243 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 6 weeks ago 1.04GB 2025-08-29 18:20:05.585254 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB 2025-08-29 18:20:05.880160 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-08-29 18:20:05.880524 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 18:20:05.946836 | orchestrator | 2025-08-29 18:20:05.946890 | orchestrator | ## Containers @ testbed-node-1 2025-08-29 18:20:05.946904 | orchestrator | 2025-08-29 18:20:05.946915 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 18:20:05.946927 | orchestrator | + echo 2025-08-29 18:20:05.946939 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-08-29 18:20:05.946950 | orchestrator | + echo 2025-08-29 18:20:05.946961 | orchestrator | + osism container testbed-node-1 ps 2025-08-29 18:20:08.223688 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-08-29 18:20:08.223775 | orchestrator | 247c48802451 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-08-29 18:20:08.223790 | orchestrator | cdde4a8c90ff registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-08-29 18:20:08.223802 | orchestrator | 6e5a5d2d0bc2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-08-29 18:20:08.223813 | orchestrator | 0d8b62659a45 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-08-29 18:20:08.223824 | orchestrator | 4eb280e36bb6 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 
2025-08-29 18:20:08.223835 | orchestrator | be78b1d29f46 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-08-29 18:20:08.223846 | orchestrator | 1c97e412e974 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-08-29 18:20:08.223856 | orchestrator | 476ad89bd25d registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-08-29 18:20:08.223867 | orchestrator | d3cd9b048619 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-08-29 18:20:08.223878 | orchestrator | c50bcbf6b30a registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-08-29 18:20:08.223888 | orchestrator | e875652c73a3 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-08-29 18:20:08.223923 | orchestrator | 775c12e8d0d5 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-08-29 18:20:08.223934 | orchestrator | 3b3fc42dcb7b registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-08-29 18:20:08.223944 | orchestrator | 93dc9aaedccc registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-08-29 18:20:08.223955 | orchestrator | 15c4aea7eb49 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-08-29 18:20:08.223966 | orchestrator | 28a2098cf52c registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-08-29 18:20:08.223976 | orchestrator | 87ae2c8f2e23 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-08-29 18:20:08.223987 | orchestrator | e953cd50763a registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-08-29 18:20:08.224016 | orchestrator | fdb992629125 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-08-29 18:20:08.224044 | orchestrator | 4f99a8e147c6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-08-29 18:20:08.224055 | orchestrator | f5f02a9523d4 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_api
2025-08-29 18:20:08.224066 | orchestrator | 23fd80168855 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-08-29 18:20:08.224077 | orchestrator | 6012c3c3cca2 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-08-29 18:20:08.224088 | orchestrator | 704a1980edc6 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-08-29 18:20:08.224099 | orchestrator | 539d5af64c4e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-08-29 18:20:08.224111 | orchestrator | 3761384c0e74 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-08-29 18:20:08.224172 | orchestrator | 39af3d123602 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-08-29 18:20:08.224186 | orchestrator | e781eb5513d4 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-08-29 18:20:08.224197 | orchestrator | fa34decce207 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-08-29 18:20:08.224216 | orchestrator | 19a5010e0f6c registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-08-29 18:20:08.224226 | orchestrator | 4670a7812494 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-08-29 18:20:08.224237 | orchestrator | a1037f7ec0cf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-08-29 18:20:08.224247 | orchestrator | fc5524a1ea2c registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-08-29 18:20:08.224258 | orchestrator | 8179f04faa16 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-08-29 18:20:08.224269 | orchestrator | dc0dab6d393b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-08-29 18:20:08.224279 | orchestrator | 4728c96d31f8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-08-29 18:20:08.224290 | orchestrator | 58829e425519 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-08-29 18:20:08.224300 | orchestrator | a056d1fa2360 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-08-29 18:20:08.224311 | orchestrator | 031823ab75e8 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-08-29 18:20:08.224321 | orchestrator | 45bc8b1cf6d4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2025-08-29 18:20:08.224340 | orchestrator | 0614f65b7541 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-08-29 18:20:08.224357 | orchestrator | b0a2f723ce06 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-08-29 18:20:08.224369 | orchestrator | 5b3609f4e679 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-08-29 18:20:08.224380 | orchestrator | 6a40b4ea7904 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-08-29 18:20:08.224430 | orchestrator | 1cab684033dc registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-08-29 18:20:08.224443 | orchestrator | 443d7c731ed4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-08-29 18:20:08.224454 | orchestrator | f7e9dac6d718 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-08-29 18:20:08.224464 | orchestrator | 99739c7737cb registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-08-29 18:20:08.224482 | orchestrator | 067c2ffe8f41 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-08-29 18:20:08.224493 | orchestrator | df76edeb2b7e registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-08-29 18:20:08.224504 | orchestrator | 07abf4a463af registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-08-29 18:20:08.224514 | orchestrator | ca3c850b3952 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-08-29 18:20:08.224525 | orchestrator | cc0e3b33dbec registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-08-29 18:20:08.224536 | orchestrator | a7924cec0484 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-08-29 18:20:08.224546 | orchestrator | 172e77dfd555 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-08-29 18:20:08.224557 | orchestrator | 30ad961c5aba registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-08-29 18:20:08.224567 | orchestrator | 4a3a74250c61 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-08-29 18:20:08.513056 | orchestrator |
2025-08-29 18:20:08.513177 | orchestrator | ## Images @ testbed-node-1
2025-08-29 18:20:08.513195 | orchestrator |
2025-08-29 18:20:08.513207 | orchestrator | + echo
2025-08-29 18:20:08.513219 | orchestrator | + echo '## Images @ testbed-node-1'
2025-08-29 18:20:08.513231 | orchestrator | + echo
2025-08-29 18:20:08.513242 | orchestrator | + osism container testbed-node-1 images
2025-08-29 18:20:10.729942 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-08-29 18:20:10.730163 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB
2025-08-29 18:20:10.730190 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB
2025-08-29 18:20:10.730246 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB
2025-08-29 18:20:10.730258 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB
2025-08-29 18:20:10.730268 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB
2025-08-29 18:20:10.730278 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB
2025-08-29 18:20:10.730288 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB
2025-08-29 18:20:10.730298 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB
2025-08-29 18:20:10.730307 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB
2025-08-29 18:20:10.730317 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB
2025-08-29 18:20:10.730349 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB
2025-08-29 18:20:10.730360 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB
2025-08-29 18:20:10.730369 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB
2025-08-29 18:20:10.730378 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB
2025-08-29 18:20:10.730404 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB
2025-08-29 18:20:10.730414 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB
2025-08-29 18:20:10.730423 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB
2025-08-29 18:20:10.730433 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB
2025-08-29 18:20:10.730442 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB
2025-08-29 18:20:10.730452 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB
2025-08-29 18:20:10.730461 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB
2025-08-29 18:20:10.730471 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB
2025-08-29 18:20:10.730480 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB
2025-08-29 18:20:10.730489 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB
2025-08-29 18:20:10.730505 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB
2025-08-29 18:20:10.730516 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB
2025-08-29 18:20:10.730544 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB
2025-08-29 18:20:10.730555 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB
2025-08-29 18:20:10.730565 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB
2025-08-29 18:20:10.730576 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB
2025-08-29 18:20:10.730587 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB
2025-08-29 18:20:10.730618 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB
2025-08-29 18:20:10.730630 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB
2025-08-29 18:20:10.730641 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB
2025-08-29 18:20:10.730651 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB
2025-08-29 18:20:10.730662 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB
2025-08-29 18:20:10.730680 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB
2025-08-29 18:20:10.730691 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB
2025-08-29 18:20:10.730702 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB
2025-08-29 18:20:10.730713 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB
2025-08-29 18:20:10.730723 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB
2025-08-29 18:20:10.730798 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB
2025-08-29 18:20:10.730857 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB
2025-08-29 18:20:10.730869 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB
2025-08-29 18:20:10.730880 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB
2025-08-29 18:20:10.730889 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB
2025-08-29 18:20:10.730899 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB
2025-08-29 18:20:10.730908 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB
2025-08-29 18:20:10.730918 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB
2025-08-29 18:20:10.730927 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB
2025-08-29 18:20:10.730936 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB
2025-08-29 18:20:10.730946 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB
2025-08-29 18:20:10.730955 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB
2025-08-29 18:20:10.730965 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB
2025-08-29 18:20:10.730974 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB
2025-08-29 18:20:11.020501 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-08-29 18:20:11.020976 | orchestrator | ++ semver 9.2.0 5.0.0
2025-08-29 18:20:11.078349 | orchestrator |
2025-08-29 18:20:11.078389 | orchestrator | ## Containers @ testbed-node-2
2025-08-29 18:20:11.078402 | orchestrator |
2025-08-29 18:20:11.078413 | orchestrator | + [[ 1 -eq -1 ]]
2025-08-29 18:20:11.078424 | orchestrator | + echo
2025-08-29 18:20:11.078435 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-08-29 18:20:11.078446 | orchestrator | + echo
2025-08-29 18:20:11.078457 | orchestrator | + osism container testbed-node-2 ps
2025-08-29 18:20:13.391079 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-08-29 18:20:13.391253 | orchestrator | ec77899de477 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-08-29 18:20:13.391271 | orchestrator | f071e51c59ec registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-08-29 18:20:13.391303 | orchestrator | df50fcab5005 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-08-29 18:20:13.391315 | orchestrator | 3b750496f5e8 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-08-29 18:20:13.391325 | orchestrator | 3988e5d57792 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-08-29 18:20:13.391336 | orchestrator | b147e9b6811f registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-08-29 18:20:13.391347 | orchestrator | 68761462bd24 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-08-29 18:20:13.391357 | orchestrator | ae6546cc50b3 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-08-29 18:20:13.391368 | orchestrator | 6b200f6d98c9 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-08-29 18:20:13.391378 | orchestrator | 662c0bed9201 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-08-29 18:20:13.391389 | orchestrator | 062f6fa26c37 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-08-29 18:20:13.391399 | orchestrator | 6bd40bf671f4 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-08-29 18:20:13.391410 | orchestrator | 916856c31310 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-08-29 18:20:13.391420 | orchestrator | dab4f675d989 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-08-29 18:20:13.391431 | orchestrator | ce12c961724b registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-08-29 18:20:13.391441 | orchestrator | cfc99827af31 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-08-29 18:20:13.391452 | orchestrator | 7af64c4afd93 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-08-29 18:20:13.391462 | orchestrator | d19413cc4fcc registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-08-29 18:20:13.391472 | orchestrator | e45c96174ad7 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-08-29 18:20:13.391499 | orchestrator | 3f64616eae22 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-08-29 18:20:13.391522 | orchestrator | 1f188726d7dc registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-08-29 18:20:13.391533 | orchestrator | 913758e20ee0 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-08-29 18:20:13.391544 | orchestrator | a7aa0fb05af0 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-08-29 18:20:13.391554 | orchestrator | 0d981b421730 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-08-29 18:20:13.391564 | orchestrator | 030cd6167ee9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-08-29 18:20:13.391576 | orchestrator | 96e3423ca4c6 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-08-29 18:20:13.391586 | orchestrator | 35b0a0cde2df registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-08-29 18:20:13.391597 | orchestrator | 3a53f92e4e71 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-08-29 18:20:13.391608 | orchestrator | 56946138bfe9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-08-29 18:20:13.391619 | orchestrator | e6b07fe98830 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-08-29 18:20:13.391629 | orchestrator | b06aba8a1983 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-08-29 18:20:13.391640 | orchestrator | 3d9c9128cbf6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2
2025-08-29 18:20:13.391650 | orchestrator | bc7e290e36cb registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-08-29 18:20:13.391666 | orchestrator | c3bd7585b0a9 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-08-29 18:20:13.391677 | orchestrator | 76f51c074753 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-08-29 18:20:13.391688 | orchestrator | 47ab618978aa registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-08-29 18:20:13.391698 | orchestrator | e2e7e0407f0d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-08-29 18:20:13.391709 | orchestrator | e5e281811f22 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-08-29 18:20:13.391726 | orchestrator | 92a628a764fd registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-08-29 18:20:13.391737 | orchestrator | b5bb64371fe5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2025-08-29 18:20:13.391759 | orchestrator | ac479002c06d registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-08-29 18:20:13.391771 | orchestrator | 2e69101e47e8 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-08-29 18:20:13.391782 | orchestrator | 0f47e4eac1a2 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-08-29 18:20:13.391793 | orchestrator | 1ea6e2a189f4 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-08-29 18:20:13.391803 | orchestrator | e35c54063bc1 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-08-29 18:20:13.391814 | orchestrator | 24bac0631170 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-08-29 18:20:13.391824 | orchestrator | 8b2d1e2e8035 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-08-29 18:20:13.391835 | orchestrator | 91995ea39f1f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-08-29 18:20:13.391845 | orchestrator | e47054f05e4a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2025-08-29 18:20:13.391856 | orchestrator | ebd60f4c4e41 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-08-29 18:20:13.391866 | orchestrator | 03ff156aaf83 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-08-29 18:20:13.391877 | orchestrator | c6854ceb64f0 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-08-29 18:20:13.391887 | orchestrator | a28900c52734 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-08-29 18:20:13.391898 | orchestrator | 11ca7a938195 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-08-29 18:20:13.391908 | orchestrator | ed8509050587 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-08-29 18:20:13.391919 | orchestrator | fc5f8ed4a1ca registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-08-29 18:20:13.391929 | orchestrator | 4316f84a67a4 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-08-29 18:20:13.702524 | orchestrator |
2025-08-29 18:20:13.702622 | orchestrator | ## Images @ testbed-node-2
2025-08-29 18:20:13.702637 | orchestrator |
2025-08-29 18:20:13.702651 | orchestrator | + echo
2025-08-29 18:20:13.702664 | orchestrator | + echo '## Images @ testbed-node-2'
2025-08-29 18:20:13.702677 | orchestrator | + echo
2025-08-29 18:20:13.702688 | orchestrator | + osism container testbed-node-2 images
2025-08-29 18:20:15.991944 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-08-29 18:20:15.992067 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 6 weeks ago 628MB
2025-08-29 18:20:15.992082 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 6 weeks ago 329MB
2025-08-29 18:20:15.992162 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 6 weeks ago 326MB
2025-08-29 18:20:15.992177 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 6 weeks ago 1.59GB
2025-08-29 18:20:15.992188 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 6 weeks ago 1.55GB
2025-08-29 18:20:15.992199 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 6 weeks ago 417MB
2025-08-29 18:20:15.992210 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 6 weeks ago 318MB
2025-08-29 18:20:15.992221 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 6 weeks ago 746MB
2025-08-29 18:20:15.992231 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 6 weeks ago 375MB
2025-08-29 18:20:15.992242 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 6 weeks ago 1.01GB
2025-08-29 18:20:15.992252 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 6 weeks ago 318MB
2025-08-29 18:20:15.992263 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 6 weeks ago 361MB
2025-08-29 18:20:15.992274 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 6 weeks ago 361MB
2025-08-29 18:20:15.992284 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 6 weeks ago 1.21GB
2025-08-29 18:20:15.992295 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 6 weeks ago 353MB
2025-08-29 18:20:15.992305 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 6 weeks ago 410MB
2025-08-29 18:20:15.992316 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 6 weeks ago 344MB
2025-08-29 18:20:15.992326 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 6 weeks ago 358MB
2025-08-29 18:20:15.992337 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 6 weeks ago 324MB
2025-08-29 18:20:15.992367 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 6 weeks ago 351MB
2025-08-29 18:20:15.992378 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 6 weeks ago 324MB
2025-08-29 18:20:15.992389 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 6 weeks ago 590MB
2025-08-29 18:20:15.992399 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 6 weeks ago 947MB
2025-08-29 18:20:15.992431 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 6 weeks ago 946MB
2025-08-29 18:20:15.992442 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 6 weeks ago 947MB
2025-08-29 18:20:15.992453 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 6 weeks ago 946MB
2025-08-29 18:20:15.992463 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 6 weeks ago 1.1GB
2025-08-29 18:20:15.992474 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 6 weeks ago 1.1GB
2025-08-29 18:20:15.992486 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 6 weeks ago 1.12GB
2025-08-29 18:20:15.992498 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 6 weeks ago 1.1GB
2025-08-29 18:20:15.992510 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 6 weeks ago 1.12GB
2025-08-29 18:20:15.992540 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 6 weeks ago 1.15GB
2025-08-29 18:20:15.992554 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 6 weeks ago 1.04GB
2025-08-29 18:20:15.992567 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 6 weeks ago 1.06GB
2025-08-29 18:20:15.992579 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 6 weeks ago 1.06GB
2025-08-29 18:20:15.992591 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 6 weeks ago 1.06GB
2025-08-29 18:20:15.992603 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 6 weeks ago 1.41GB
2025-08-29 18:20:15.992621 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 6 weeks ago 1.41GB
2025-08-29 18:20:15.992633 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 6 weeks ago 1.29GB
2025-08-29 18:20:15.992645 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 6 weeks ago 1.42GB
2025-08-29 18:20:15.992657 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 6 weeks ago 1.29GB
2025-08-29 18:20:15.992669 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 6 weeks ago 1.29GB
2025-08-29 18:20:15.992681 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 6 weeks ago 1.2GB
2025-08-29 18:20:15.992693 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 6 weeks ago 1.31GB
2025-08-29 18:20:15.992705 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 6 weeks ago 1.05GB
2025-08-29 18:20:15.992717 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 6 weeks ago 1.05GB
2025-08-29 18:20:15.992729 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 6 weeks ago 1.05GB
2025-08-29 18:20:15.992741 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 6 weeks ago 1.06GB
2025-08-29 18:20:15.992754 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 6 weeks ago 1.06GB
2025-08-29 18:20:15.992775 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 6 weeks ago 1.05GB
2025-08-29 18:20:15.992787 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 6 weeks ago 1.11GB
2025-08-29 18:20:15.992799 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 6 weeks ago 1.13GB
2025-08-29 18:20:15.992811 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 6 weeks ago 1.11GB
2025-08-29 18:20:15.992824 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 6 weeks ago 1.24GB
2025-08-29 18:20:15.992836 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 months ago 1.27GB
2025-08-29 18:20:16.300857 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-08-29 18:20:16.309716 | orchestrator | + set -e
2025-08-29 18:20:16.309760 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 18:20:16.311058 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 18:20:16.311081 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 18:20:16.311092 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 18:20:16.311103 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 18:20:16.311114 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 18:20:16.311126 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 18:20:16.311182 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 18:20:16.311193 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 18:20:16.311204 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 18:20:16.311215 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 18:20:16.311225 | orchestrator | ++ export ARA=false
2025-08-29 18:20:16.311236 | orchestrator | ++ ARA=false
2025-08-29 18:20:16.311246 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 18:20:16.311257 |
orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 18:20:16.311268 | orchestrator | ++ export TEMPEST=false 2025-08-29 18:20:16.311278 | orchestrator | ++ TEMPEST=false 2025-08-29 18:20:16.311289 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 18:20:16.311299 | orchestrator | ++ IS_ZUUL=true 2025-08-29 18:20:16.311310 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2025-08-29 18:20:16.311326 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57 2025-08-29 18:20:16.311337 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 18:20:16.311348 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 18:20:16.311359 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 18:20:16.311369 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 18:20:16.311380 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 18:20:16.311391 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 18:20:16.311401 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 18:20:16.311411 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 18:20:16.311422 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 18:20:16.311433 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-08-29 18:20:16.318194 | orchestrator | + set -e 2025-08-29 18:20:16.318658 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 18:20:16.318679 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 18:20:16.318690 | orchestrator | ++ INTERACTIVE=false 2025-08-29 18:20:16.318700 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 18:20:16.318711 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 18:20:16.318721 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-08-29 18:20:16.319393 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-08-29 18:20:16.326479 | orchestrator | 2025-08-29 18:20:16.326502 | 
orchestrator | # Ceph status 2025-08-29 18:20:16.326513 | orchestrator | 2025-08-29 18:20:16.326524 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-08-29 18:20:16.326535 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-08-29 18:20:16.326546 | orchestrator | + echo 2025-08-29 18:20:16.326556 | orchestrator | + echo '# Ceph status' 2025-08-29 18:20:16.326567 | orchestrator | + echo 2025-08-29 18:20:16.326577 | orchestrator | + ceph -s 2025-08-29 18:20:16.907609 | orchestrator | cluster: 2025-08-29 18:20:16.907714 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-08-29 18:20:16.907729 | orchestrator | health: HEALTH_OK 2025-08-29 18:20:16.907741 | orchestrator | 2025-08-29 18:20:16.907752 | orchestrator | services: 2025-08-29 18:20:16.907789 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-08-29 18:20:16.907814 | orchestrator | mgr: testbed-node-0(active, since 16m), standbys: testbed-node-1, testbed-node-2 2025-08-29 18:20:16.907827 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-08-29 18:20:16.907838 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-08-29 18:20:16.907850 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-08-29 18:20:16.907861 | orchestrator | 2025-08-29 18:20:16.907872 | orchestrator | data: 2025-08-29 18:20:16.907883 | orchestrator | volumes: 1/1 healthy 2025-08-29 18:20:16.907894 | orchestrator | pools: 14 pools, 401 pgs 2025-08-29 18:20:16.907905 | orchestrator | objects: 523 objects, 2.2 GiB 2025-08-29 18:20:16.907916 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-08-29 18:20:16.907928 | orchestrator | pgs: 401 active+clean 2025-08-29 18:20:16.907939 | orchestrator | 2025-08-29 18:20:16.949852 | orchestrator | 2025-08-29 18:20:16.949891 | orchestrator | # Ceph versions 2025-08-29 18:20:16.949903 | orchestrator | 2025-08-29 18:20:16.949914 | orchestrator | + echo 2025-08-29 18:20:16.949924 | orchestrator | + echo '# Ceph 
versions' 2025-08-29 18:20:16.949935 | orchestrator | + echo 2025-08-29 18:20:16.949945 | orchestrator | + ceph versions 2025-08-29 18:20:17.559853 | orchestrator | { 2025-08-29 18:20:17.559942 | orchestrator | "mon": { 2025-08-29 18:20:17.559959 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 18:20:17.559971 | orchestrator | }, 2025-08-29 18:20:17.559982 | orchestrator | "mgr": { 2025-08-29 18:20:17.559993 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 18:20:17.560004 | orchestrator | }, 2025-08-29 18:20:17.560015 | orchestrator | "osd": { 2025-08-29 18:20:17.560026 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-08-29 18:20:17.560036 | orchestrator | }, 2025-08-29 18:20:17.560046 | orchestrator | "mds": { 2025-08-29 18:20:17.560057 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 18:20:17.560068 | orchestrator | }, 2025-08-29 18:20:17.560078 | orchestrator | "rgw": { 2025-08-29 18:20:17.560088 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-08-29 18:20:17.560099 | orchestrator | }, 2025-08-29 18:20:17.560110 | orchestrator | "overall": { 2025-08-29 18:20:17.560120 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-08-29 18:20:17.560178 | orchestrator | } 2025-08-29 18:20:17.560191 | orchestrator | } 2025-08-29 18:20:17.601253 | orchestrator | 2025-08-29 18:20:17.601288 | orchestrator | # Ceph OSD tree 2025-08-29 18:20:17.601305 | orchestrator | 2025-08-29 18:20:17.601323 | orchestrator | + echo 2025-08-29 18:20:17.601343 | orchestrator | + echo '# Ceph OSD tree' 2025-08-29 18:20:17.601362 | orchestrator | + echo 2025-08-29 18:20:17.601380 | orchestrator | + ceph osd df tree 2025-08-29 18:20:18.145700 | orchestrator | 
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-08-29 18:20:18.145847 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-08-29 18:20:18.145861 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-08-29 18:20:18.145872 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 908 MiB 834 MiB 1 KiB 74 MiB 19 GiB 4.44 0.75 174 up osd.0 2025-08-29 18:20:18.145883 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.39 1.25 218 up osd.3 2025-08-29 18:20:18.145894 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-08-29 18:20:18.145920 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.51 1.10 197 up osd.1 2025-08-29 18:20:18.145932 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 70 MiB 19 GiB 5.32 0.90 191 up osd.5 2025-08-29 18:20:18.145942 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2025-08-29 18:20:18.145979 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.51 1.27 195 up osd.2 2025-08-29 18:20:18.145991 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 884 MiB 811 MiB 1 KiB 74 MiB 19 GiB 4.32 0.73 195 up osd.4 2025-08-29 18:20:18.146002 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-08-29 18:20:18.146013 | orchestrator | MIN/MAX VAR: 0.73/1.27 STDDEV: 1.30 2025-08-29 18:20:18.194771 | orchestrator | 2025-08-29 18:20:18.194805 | orchestrator | # Ceph monitor status 2025-08-29 18:20:18.194818 | orchestrator | 2025-08-29 18:20:18.194829 | orchestrator | + echo 2025-08-29 18:20:18.194840 | orchestrator | + echo '# Ceph monitor status' 2025-08-29 18:20:18.194851 | orchestrator | + echo 2025-08-29 18:20:18.194862 | orchestrator | + ceph mon stat 2025-08-29 
18:20:18.826622 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-08-29 18:20:18.868082 | orchestrator | 2025-08-29 18:20:18.868185 | orchestrator | # Ceph quorum status 2025-08-29 18:20:18.868210 | orchestrator | 2025-08-29 18:20:18.868228 | orchestrator | + echo 2025-08-29 18:20:18.868249 | orchestrator | + echo '# Ceph quorum status' 2025-08-29 18:20:18.868262 | orchestrator | + echo 2025-08-29 18:20:18.869183 | orchestrator | + ceph quorum_status 2025-08-29 18:20:18.869208 | orchestrator | + jq 2025-08-29 18:20:19.568407 | orchestrator | { 2025-08-29 18:20:19.568702 | orchestrator | "election_epoch": 8, 2025-08-29 18:20:19.568722 | orchestrator | "quorum": [ 2025-08-29 18:20:19.568733 | orchestrator | 0, 2025-08-29 18:20:19.568743 | orchestrator | 1, 2025-08-29 18:20:19.568753 | orchestrator | 2 2025-08-29 18:20:19.568762 | orchestrator | ], 2025-08-29 18:20:19.568772 | orchestrator | "quorum_names": [ 2025-08-29 18:20:19.568782 | orchestrator | "testbed-node-0", 2025-08-29 18:20:19.568792 | orchestrator | "testbed-node-1", 2025-08-29 18:20:19.568802 | orchestrator | "testbed-node-2" 2025-08-29 18:20:19.568812 | orchestrator | ], 2025-08-29 18:20:19.568822 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-08-29 18:20:19.568833 | orchestrator | "quorum_age": 1685, 2025-08-29 18:20:19.568843 | orchestrator | "features": { 2025-08-29 18:20:19.568853 | orchestrator | "quorum_con": "4540138322906710015", 2025-08-29 18:20:19.568863 | orchestrator | "quorum_mon": [ 2025-08-29 18:20:19.568873 | orchestrator | "kraken", 2025-08-29 18:20:19.568882 | orchestrator | "luminous", 2025-08-29 18:20:19.568892 | orchestrator | "mimic", 2025-08-29 
18:20:19.568902 | orchestrator | "osdmap-prune", 2025-08-29 18:20:19.568912 | orchestrator | "nautilus", 2025-08-29 18:20:19.568921 | orchestrator | "octopus", 2025-08-29 18:20:19.568931 | orchestrator | "pacific", 2025-08-29 18:20:19.568941 | orchestrator | "elector-pinging", 2025-08-29 18:20:19.568951 | orchestrator | "quincy", 2025-08-29 18:20:19.568960 | orchestrator | "reef" 2025-08-29 18:20:19.568970 | orchestrator | ] 2025-08-29 18:20:19.568980 | orchestrator | }, 2025-08-29 18:20:19.568990 | orchestrator | "monmap": { 2025-08-29 18:20:19.569000 | orchestrator | "epoch": 1, 2025-08-29 18:20:19.569010 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-08-29 18:20:19.569021 | orchestrator | "modified": "2025-08-29T17:51:53.611717Z", 2025-08-29 18:20:19.569031 | orchestrator | "created": "2025-08-29T17:51:53.611717Z", 2025-08-29 18:20:19.569040 | orchestrator | "min_mon_release": 18, 2025-08-29 18:20:19.569050 | orchestrator | "min_mon_release_name": "reef", 2025-08-29 18:20:19.569060 | orchestrator | "election_strategy": 1, 2025-08-29 18:20:19.569070 | orchestrator | "disallowed_leaders: ": "", 2025-08-29 18:20:19.569080 | orchestrator | "stretch_mode": false, 2025-08-29 18:20:19.569090 | orchestrator | "tiebreaker_mon": "", 2025-08-29 18:20:19.569099 | orchestrator | "removed_ranks: ": "", 2025-08-29 18:20:19.569109 | orchestrator | "features": { 2025-08-29 18:20:19.569119 | orchestrator | "persistent": [ 2025-08-29 18:20:19.569128 | orchestrator | "kraken", 2025-08-29 18:20:19.569161 | orchestrator | "luminous", 2025-08-29 18:20:19.569170 | orchestrator | "mimic", 2025-08-29 18:20:19.569180 | orchestrator | "osdmap-prune", 2025-08-29 18:20:19.569189 | orchestrator | "nautilus", 2025-08-29 18:20:19.569198 | orchestrator | "octopus", 2025-08-29 18:20:19.569224 | orchestrator | "pacific", 2025-08-29 18:20:19.569234 | orchestrator | "elector-pinging", 2025-08-29 18:20:19.569262 | orchestrator | "quincy", 2025-08-29 18:20:19.569272 | 
orchestrator | "reef" 2025-08-29 18:20:19.569282 | orchestrator | ], 2025-08-29 18:20:19.569291 | orchestrator | "optional": [] 2025-08-29 18:20:19.569300 | orchestrator | }, 2025-08-29 18:20:19.569310 | orchestrator | "mons": [ 2025-08-29 18:20:19.569321 | orchestrator | { 2025-08-29 18:20:19.569332 | orchestrator | "rank": 0, 2025-08-29 18:20:19.569343 | orchestrator | "name": "testbed-node-0", 2025-08-29 18:20:19.569353 | orchestrator | "public_addrs": { 2025-08-29 18:20:19.569365 | orchestrator | "addrvec": [ 2025-08-29 18:20:19.569376 | orchestrator | { 2025-08-29 18:20:19.569387 | orchestrator | "type": "v2", 2025-08-29 18:20:19.569397 | orchestrator | "addr": "192.168.16.10:3300", 2025-08-29 18:20:19.569407 | orchestrator | "nonce": 0 2025-08-29 18:20:19.569418 | orchestrator | }, 2025-08-29 18:20:19.569429 | orchestrator | { 2025-08-29 18:20:19.569439 | orchestrator | "type": "v1", 2025-08-29 18:20:19.569450 | orchestrator | "addr": "192.168.16.10:6789", 2025-08-29 18:20:19.569461 | orchestrator | "nonce": 0 2025-08-29 18:20:19.569472 | orchestrator | } 2025-08-29 18:20:19.569483 | orchestrator | ] 2025-08-29 18:20:19.569493 | orchestrator | }, 2025-08-29 18:20:19.569504 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-08-29 18:20:19.569515 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-08-29 18:20:19.569526 | orchestrator | "priority": 0, 2025-08-29 18:20:19.569536 | orchestrator | "weight": 0, 2025-08-29 18:20:19.569547 | orchestrator | "crush_location": "{}" 2025-08-29 18:20:19.569557 | orchestrator | }, 2025-08-29 18:20:19.569568 | orchestrator | { 2025-08-29 18:20:19.569579 | orchestrator | "rank": 1, 2025-08-29 18:20:19.569589 | orchestrator | "name": "testbed-node-1", 2025-08-29 18:20:19.569600 | orchestrator | "public_addrs": { 2025-08-29 18:20:19.569610 | orchestrator | "addrvec": [ 2025-08-29 18:20:19.569620 | orchestrator | { 2025-08-29 18:20:19.569632 | orchestrator | "type": "v2", 2025-08-29 18:20:19.569642 | orchestrator | 
"addr": "192.168.16.11:3300", 2025-08-29 18:20:19.569653 | orchestrator | "nonce": 0 2025-08-29 18:20:19.569664 | orchestrator | }, 2025-08-29 18:20:19.569675 | orchestrator | { 2025-08-29 18:20:19.569684 | orchestrator | "type": "v1", 2025-08-29 18:20:19.569694 | orchestrator | "addr": "192.168.16.11:6789", 2025-08-29 18:20:19.569703 | orchestrator | "nonce": 0 2025-08-29 18:20:19.569712 | orchestrator | } 2025-08-29 18:20:19.569722 | orchestrator | ] 2025-08-29 18:20:19.569731 | orchestrator | }, 2025-08-29 18:20:19.569740 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-08-29 18:20:19.569750 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-08-29 18:20:19.569759 | orchestrator | "priority": 0, 2025-08-29 18:20:19.569769 | orchestrator | "weight": 0, 2025-08-29 18:20:19.569778 | orchestrator | "crush_location": "{}" 2025-08-29 18:20:19.569787 | orchestrator | }, 2025-08-29 18:20:19.569797 | orchestrator | { 2025-08-29 18:20:19.569806 | orchestrator | "rank": 2, 2025-08-29 18:20:19.569815 | orchestrator | "name": "testbed-node-2", 2025-08-29 18:20:19.569825 | orchestrator | "public_addrs": { 2025-08-29 18:20:19.569834 | orchestrator | "addrvec": [ 2025-08-29 18:20:19.569843 | orchestrator | { 2025-08-29 18:20:19.569853 | orchestrator | "type": "v2", 2025-08-29 18:20:19.569862 | orchestrator | "addr": "192.168.16.12:3300", 2025-08-29 18:20:19.569872 | orchestrator | "nonce": 0 2025-08-29 18:20:19.569881 | orchestrator | }, 2025-08-29 18:20:19.569891 | orchestrator | { 2025-08-29 18:20:19.569900 | orchestrator | "type": "v1", 2025-08-29 18:20:19.569909 | orchestrator | "addr": "192.168.16.12:6789", 2025-08-29 18:20:19.569919 | orchestrator | "nonce": 0 2025-08-29 18:20:19.569928 | orchestrator | } 2025-08-29 18:20:19.569938 | orchestrator | ] 2025-08-29 18:20:19.569947 | orchestrator | }, 2025-08-29 18:20:19.569957 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-08-29 18:20:19.569966 | orchestrator | "public_addr": "192.168.16.12:6789/0", 
2025-08-29 18:20:19.569975 | orchestrator | "priority": 0, 2025-08-29 18:20:19.569985 | orchestrator | "weight": 0, 2025-08-29 18:20:19.569994 | orchestrator | "crush_location": "{}" 2025-08-29 18:20:19.570003 | orchestrator | } 2025-08-29 18:20:19.570013 | orchestrator | ] 2025-08-29 18:20:19.570068 | orchestrator | } 2025-08-29 18:20:19.570078 | orchestrator | } 2025-08-29 18:20:19.570099 | orchestrator | 2025-08-29 18:20:19.570109 | orchestrator | # Ceph free space status 2025-08-29 18:20:19.570126 | orchestrator | 2025-08-29 18:20:19.570153 | orchestrator | + echo 2025-08-29 18:20:19.570163 | orchestrator | + echo '# Ceph free space status' 2025-08-29 18:20:19.570173 | orchestrator | + echo 2025-08-29 18:20:19.570182 | orchestrator | + ceph df 2025-08-29 18:20:20.167565 | orchestrator | --- RAW STORAGE --- 2025-08-29 18:20:20.167666 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-08-29 18:20:20.167692 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 18:20:20.167704 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-08-29 18:20:20.167715 | orchestrator | 2025-08-29 18:20:20.167727 | orchestrator | --- POOLS --- 2025-08-29 18:20:20.167739 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-08-29 18:20:20.167752 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-08-29 18:20:20.167763 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-08-29 18:20:20.167774 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-08-29 18:20:20.167786 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-08-29 18:20:20.167796 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-08-29 18:20:20.167807 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-08-29 18:20:20.167818 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-08-29 18:20:20.167828 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-08-29 
18:20:20.167839 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 52 GiB 2025-08-29 18:20:20.167850 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 18:20:20.167861 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 18:20:20.167871 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2025-08-29 18:20:20.167882 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 18:20:20.167893 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-08-29 18:20:20.212114 | orchestrator | ++ semver 9.2.0 5.0.0 2025-08-29 18:20:20.266253 | orchestrator | + [[ 1 -eq -1 ]] 2025-08-29 18:20:20.266285 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-08-29 18:20:20.266297 | orchestrator | + osism apply facts 2025-08-29 18:20:22.170695 | orchestrator | 2025-08-29 18:20:22 | INFO  | Task e4328158-3dca-4939-b696-d8e966b66773 (facts) was prepared for execution. 2025-08-29 18:20:22.170805 | orchestrator | 2025-08-29 18:20:22 | INFO  | It takes a moment until task e4328158-3dca-4939-b696-d8e966b66773 (facts) has been started and output is visible here. 
2025-08-29 18:20:35.565968 | orchestrator | 2025-08-29 18:20:35.566140 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 18:20:35.566193 | orchestrator | 2025-08-29 18:20:35.566206 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 18:20:35.566217 | orchestrator | Friday 29 August 2025 18:20:26 +0000 (0:00:00.288) 0:00:00.288 ********* 2025-08-29 18:20:35.566228 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:20:35.566240 | orchestrator | ok: [testbed-manager] 2025-08-29 18:20:35.566251 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:20:35.566262 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:20:35.566272 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:20:35.566283 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:20:35.566293 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:20:35.566304 | orchestrator | 2025-08-29 18:20:35.566315 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 18:20:35.566325 | orchestrator | Friday 29 August 2025 18:20:28 +0000 (0:00:01.635) 0:00:01.924 ********* 2025-08-29 18:20:35.566336 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:20:35.566349 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:20:35.566359 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:20:35.566397 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:20:35.566408 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:20:35.566419 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:20:35.566430 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:20:35.566480 | orchestrator | 2025-08-29 18:20:35.566494 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 18:20:35.566506 | orchestrator | 2025-08-29 18:20:35.566518 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-08-29 18:20:35.566530 | orchestrator | Friday 29 August 2025 18:20:29 +0000 (0:00:01.384) 0:00:03.309 ********* 2025-08-29 18:20:35.566543 | orchestrator | ok: [testbed-node-2] 2025-08-29 18:20:35.566555 | orchestrator | ok: [testbed-node-1] 2025-08-29 18:20:35.566567 | orchestrator | ok: [testbed-node-0] 2025-08-29 18:20:35.566580 | orchestrator | ok: [testbed-manager] 2025-08-29 18:20:35.566592 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:20:35.566604 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:20:35.566615 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:20:35.566628 | orchestrator | 2025-08-29 18:20:35.566640 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 18:20:35.566653 | orchestrator | 2025-08-29 18:20:35.566665 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 18:20:35.566677 | orchestrator | Friday 29 August 2025 18:20:34 +0000 (0:00:05.069) 0:00:08.378 ********* 2025-08-29 18:20:35.566689 | orchestrator | skipping: [testbed-manager] 2025-08-29 18:20:35.566701 | orchestrator | skipping: [testbed-node-0] 2025-08-29 18:20:35.566713 | orchestrator | skipping: [testbed-node-1] 2025-08-29 18:20:35.566725 | orchestrator | skipping: [testbed-node-2] 2025-08-29 18:20:35.566738 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:20:35.566750 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:20:35.566762 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:20:35.566774 | orchestrator | 2025-08-29 18:20:35.566786 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:20:35.566799 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:20:35.566813 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-08-29 18:20:35.566825 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:20:35.566838 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:20:35.566850 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:20:35.566862 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:20:35.566873 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 18:20:35.566884 | orchestrator | 2025-08-29 18:20:35.566894 | orchestrator | 2025-08-29 18:20:35.566905 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:20:35.566933 | orchestrator | Friday 29 August 2025 18:20:35 +0000 (0:00:00.569) 0:00:08.947 ********* 2025-08-29 18:20:35.566944 | orchestrator | =============================================================================== 2025-08-29 18:20:35.566955 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.07s 2025-08-29 18:20:35.566965 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.64s 2025-08-29 18:20:35.566976 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s 2025-08-29 18:20:35.566986 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2025-08-29 18:20:35.866278 | orchestrator | + osism validate ceph-mons 2025-08-29 18:21:07.562293 | orchestrator | 2025-08-29 18:21:07.562418 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-08-29 18:21:07.562435 | orchestrator | 2025-08-29 18:21:07.562447 | orchestrator | TASK [Get timestamp for report file] 
*******************************************
2025-08-29 18:21:07.562459 | orchestrator | Friday 29 August 2025 18:20:52 +0000 (0:00:00.434) 0:00:00.434 *********
2025-08-29 18:21:07.562470 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:07.562481 | orchestrator |
2025-08-29 18:21:07.562492 | orchestrator | TASK [Create report output directory] ******************************************
2025-08-29 18:21:07.562503 | orchestrator | Friday 29 August 2025 18:20:52 +0000 (0:00:00.633) 0:00:01.067 *********
2025-08-29 18:21:07.562514 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:07.562524 | orchestrator |
2025-08-29 18:21:07.562535 | orchestrator | TASK [Define report vars] ******************************************************
2025-08-29 18:21:07.562546 | orchestrator | Friday 29 August 2025 18:20:53 +0000 (0:00:00.816) 0:00:01.883 *********
2025-08-29 18:21:07.562557 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.562568 | orchestrator |
2025-08-29 18:21:07.562579 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-08-29 18:21:07.562608 | orchestrator | Friday 29 August 2025 18:20:53 +0000 (0:00:00.251) 0:00:02.134 *********
2025-08-29 18:21:07.562619 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.562630 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:07.562641 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:07.562652 | orchestrator |
2025-08-29 18:21:07.562662 | orchestrator | TASK [Get container info] ******************************************************
2025-08-29 18:21:07.562673 | orchestrator | Friday 29 August 2025 18:20:54 +0000 (0:00:00.334) 0:00:02.469 *********
2025-08-29 18:21:07.562684 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.562695 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:07.562705 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:07.562716 | orchestrator |
2025-08-29 18:21:07.562727 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-08-29 18:21:07.562737 | orchestrator | Friday 29 August 2025 18:20:55 +0000 (0:00:01.007) 0:00:03.477 *********
2025-08-29 18:21:07.562748 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.562759 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:21:07.562770 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:21:07.562781 | orchestrator |
2025-08-29 18:21:07.562792 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-08-29 18:21:07.562803 | orchestrator | Friday 29 August 2025 18:20:55 +0000 (0:00:00.281) 0:00:03.758 *********
2025-08-29 18:21:07.562816 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.562828 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:07.562841 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:07.562853 | orchestrator |
2025-08-29 18:21:07.562866 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 18:21:07.562877 | orchestrator | Friday 29 August 2025 18:20:56 +0000 (0:00:00.488) 0:00:04.246 *********
2025-08-29 18:21:07.562888 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.562899 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:07.562910 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:07.562920 | orchestrator |
2025-08-29 18:21:07.562931 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-08-29 18:21:07.562942 | orchestrator | Friday 29 August 2025 18:20:56 +0000 (0:00:00.299) 0:00:04.546 *********
2025-08-29 18:21:07.562953 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.562964 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:21:07.562975 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:21:07.562986 | orchestrator |
2025-08-29 18:21:07.562996 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-08-29 18:21:07.563007 | orchestrator | Friday 29 August 2025 18:20:56 +0000 (0:00:00.305) 0:00:04.851 *********
2025-08-29 18:21:07.563039 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563050 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:07.563061 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:07.563072 | orchestrator |
2025-08-29 18:21:07.563083 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 18:21:07.563094 | orchestrator | Friday 29 August 2025 18:20:56 +0000 (0:00:00.298) 0:00:05.150 *********
2025-08-29 18:21:07.563105 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563116 | orchestrator |
2025-08-29 18:21:07.563126 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 18:21:07.563137 | orchestrator | Friday 29 August 2025 18:20:57 +0000 (0:00:00.685) 0:00:05.836 *********
2025-08-29 18:21:07.563148 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563159 | orchestrator |
2025-08-29 18:21:07.563169 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 18:21:07.563180 | orchestrator | Friday 29 August 2025 18:20:57 +0000 (0:00:00.264) 0:00:06.100 *********
2025-08-29 18:21:07.563212 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563223 | orchestrator |
2025-08-29 18:21:07.563234 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:07.563244 | orchestrator | Friday 29 August 2025 18:20:58 +0000 (0:00:00.068) 0:00:06.340 *********
2025-08-29 18:21:07.563255 | orchestrator |
2025-08-29 18:21:07.563265 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:07.563276 | orchestrator | Friday 29 August 2025 18:20:58 +0000 (0:00:00.068) 0:00:06.408 *********
2025-08-29 18:21:07.563287 | orchestrator |
2025-08-29 18:21:07.563297 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:07.563308 | orchestrator | Friday 29 August 2025 18:20:58 +0000 (0:00:00.070) 0:00:06.479 *********
2025-08-29 18:21:07.563319 | orchestrator |
2025-08-29 18:21:07.563330 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 18:21:07.563340 | orchestrator | Friday 29 August 2025 18:20:58 +0000 (0:00:00.081) 0:00:06.561 *********
2025-08-29 18:21:07.563351 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563361 | orchestrator |
2025-08-29 18:21:07.563372 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-08-29 18:21:07.563382 | orchestrator | Friday 29 August 2025 18:20:58 +0000 (0:00:00.258) 0:00:06.819 *********
2025-08-29 18:21:07.563393 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563404 | orchestrator |
2025-08-29 18:21:07.563432 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-08-29 18:21:07.563444 | orchestrator | Friday 29 August 2025 18:20:58 +0000 (0:00:00.259) 0:00:07.079 *********
2025-08-29 18:21:07.563455 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563466 | orchestrator |
2025-08-29 18:21:07.563477 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-08-29 18:21:07.563487 | orchestrator | Friday 29 August 2025 18:20:59 +0000 (0:00:00.109) 0:00:07.188 *********
2025-08-29 18:21:07.563498 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:21:07.563508 | orchestrator |
2025-08-29 18:21:07.563519 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-08-29 18:21:07.563530 | orchestrator | Friday 29 August 2025 18:21:00 +0000 (0:00:01.584) 0:00:08.773 *********
2025-08-29 18:21:07.563540 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563551 | orchestrator |
2025-08-29 18:21:07.563562 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-08-29 18:21:07.563572 | orchestrator | Friday 29 August 2025 18:21:00 +0000 (0:00:00.310) 0:00:09.083 *********
2025-08-29 18:21:07.563583 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563594 | orchestrator |
2025-08-29 18:21:07.563604 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-08-29 18:21:07.563621 | orchestrator | Friday 29 August 2025 18:21:01 +0000 (0:00:00.311) 0:00:09.394 *********
2025-08-29 18:21:07.563639 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563650 | orchestrator |
2025-08-29 18:21:07.563661 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-08-29 18:21:07.563671 | orchestrator | Friday 29 August 2025 18:21:01 +0000 (0:00:00.341) 0:00:09.736 *********
2025-08-29 18:21:07.563682 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563693 | orchestrator |
2025-08-29 18:21:07.563703 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-08-29 18:21:07.563714 | orchestrator | Friday 29 August 2025 18:21:01 +0000 (0:00:00.340) 0:00:10.076 *********
2025-08-29 18:21:07.563724 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.563735 | orchestrator |
2025-08-29 18:21:07.563746 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-08-29 18:21:07.563756 | orchestrator | Friday 29 August 2025 18:21:02 +0000 (0:00:00.112) 0:00:10.189 *********
2025-08-29 18:21:07.563767 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563778 | orchestrator |
2025-08-29 18:21:07.563788 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-08-29 18:21:07.563799 | orchestrator | Friday 29 August 2025 18:21:02 +0000 (0:00:00.129) 0:00:10.319 *********
2025-08-29 18:21:07.563809 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563820 | orchestrator |
2025-08-29 18:21:07.563831 | orchestrator | TASK [Gather status data] ******************************************************
2025-08-29 18:21:07.563845 | orchestrator | Friday 29 August 2025 18:21:02 +0000 (0:00:00.111) 0:00:10.431 *********
2025-08-29 18:21:07.563863 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:21:07.563881 | orchestrator |
2025-08-29 18:21:07.563900 | orchestrator | TASK [Set health test data] ****************************************************
2025-08-29 18:21:07.563920 | orchestrator | Friday 29 August 2025 18:21:03 +0000 (0:00:01.283) 0:00:11.715 *********
2025-08-29 18:21:07.563939 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.563958 | orchestrator |
2025-08-29 18:21:07.563970 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-08-29 18:21:07.563981 | orchestrator | Friday 29 August 2025 18:21:03 +0000 (0:00:00.295) 0:00:12.011 *********
2025-08-29 18:21:07.563991 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.564002 | orchestrator |
2025-08-29 18:21:07.564013 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-08-29 18:21:07.564023 | orchestrator | Friday 29 August 2025 18:21:03 +0000 (0:00:00.140) 0:00:12.151 *********
2025-08-29 18:21:07.564034 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:07.564045 | orchestrator |
2025-08-29 18:21:07.564056 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-08-29 18:21:07.564067 | orchestrator | Friday 29 August 2025 18:21:04 +0000 (0:00:00.153) 0:00:12.304 *********
2025-08-29 18:21:07.564077 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.564088 | orchestrator |
2025-08-29 18:21:07.564099 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-08-29 18:21:07.564109 | orchestrator | Friday 29 August 2025 18:21:04 +0000 (0:00:00.143) 0:00:12.448 *********
2025-08-29 18:21:07.564120 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.564130 | orchestrator |
2025-08-29 18:21:07.564141 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-08-29 18:21:07.564152 | orchestrator | Friday 29 August 2025 18:21:04 +0000 (0:00:00.324) 0:00:12.773 *********
2025-08-29 18:21:07.564162 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:07.564173 | orchestrator |
2025-08-29 18:21:07.564213 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-08-29 18:21:07.564225 | orchestrator | Friday 29 August 2025 18:21:04 +0000 (0:00:00.275) 0:00:13.048 *********
2025-08-29 18:21:07.564236 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:07.564246 | orchestrator |
2025-08-29 18:21:07.564257 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 18:21:07.564267 | orchestrator | Friday 29 August 2025 18:21:05 +0000 (0:00:00.277) 0:00:13.325 *********
2025-08-29 18:21:07.564286 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:07.564297 | orchestrator |
2025-08-29 18:21:07.564307 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 18:21:07.564318 | orchestrator | Friday 29 August 2025 18:21:06 +0000 (0:00:01.652) 0:00:14.978 *********
2025-08-29 18:21:07.564328 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:07.564339 | orchestrator |
2025-08-29 18:21:07.564349 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 18:21:07.564360 | orchestrator | Friday 29 August 2025 18:21:07 +0000 (0:00:00.261) 0:00:15.239 *********
2025-08-29 18:21:07.564371 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:07.564381 | orchestrator |
2025-08-29 18:21:07.564400 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:09.934258 | orchestrator | Friday 29 August 2025 18:21:07 +0000 (0:00:00.272) 0:00:15.512 *********
2025-08-29 18:21:09.934373 | orchestrator |
2025-08-29 18:21:09.934390 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:09.934402 | orchestrator | Friday 29 August 2025 18:21:07 +0000 (0:00:00.069) 0:00:15.581 *********
2025-08-29 18:21:09.934413 | orchestrator |
2025-08-29 18:21:09.934424 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:09.934435 | orchestrator | Friday 29 August 2025 18:21:07 +0000 (0:00:00.068) 0:00:15.649 *********
2025-08-29 18:21:09.934450 | orchestrator |
2025-08-29 18:21:09.934460 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-08-29 18:21:09.934471 | orchestrator | Friday 29 August 2025 18:21:07 +0000 (0:00:00.071) 0:00:15.720 *********
2025-08-29 18:21:09.934483 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:09.934493 | orchestrator |
2025-08-29 18:21:09.934504 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 18:21:09.934515 | orchestrator | Friday 29 August 2025 18:21:09 +0000 (0:00:01.495) 0:00:17.216 *********
2025-08-29 18:21:09.934525 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-08-29 18:21:09.934536 | orchestrator |     "msg": [
2025-08-29 18:21:09.934548 | orchestrator |         "Validator run completed.",
2025-08-29 18:21:09.934559 | orchestrator |         "You can find the report file here:",
2025-08-29 18:21:09.934570 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2025-08-29T18:20:52+00:00-report.json",
2025-08-29 18:21:09.934581 | orchestrator |         "on the following host:",
2025-08-29 18:21:09.934592 | orchestrator |         "testbed-manager"
2025-08-29 18:21:09.934603 | orchestrator |     ]
2025-08-29 18:21:09.934614 | orchestrator | }
2025-08-29 18:21:09.934625 | orchestrator |
2025-08-29 18:21:09.934636 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:21:09.934648 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-08-29 18:21:09.934682 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 18:21:09.934695 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 18:21:09.934706 | orchestrator |
2025-08-29 18:21:09.934717 | orchestrator |
2025-08-29 18:21:09.934727 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:21:09.934738 | orchestrator | Friday 29 August 2025 18:21:09 +0000 (0:00:00.583) 0:00:17.799 *********
2025-08-29 18:21:09.934749 | orchestrator | ===============================================================================
2025-08-29 18:21:09.934762 | orchestrator | Aggregate test results step one ----------------------------------------- 1.65s
2025-08-29 18:21:09.934775 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s
2025-08-29 18:21:09.934810 | orchestrator | Write report file ------------------------------------------------------- 1.50s
2025-08-29 18:21:09.934823 | orchestrator | Gather status data ------------------------------------------------------ 1.28s
2025-08-29 18:21:09.934835 | orchestrator | Get container info ------------------------------------------------------ 1.01s
2025-08-29 18:21:09.934848 | orchestrator | Create report output directory ------------------------------------------ 0.82s
2025-08-29 18:21:09.934860 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s
2025-08-29 18:21:09.934872 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s
2025-08-29 18:21:09.934884 | orchestrator | Print report file information ------------------------------------------- 0.58s
2025-08-29 18:21:09.934896 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s
2025-08-29 18:21:09.934908 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2025-08-29 18:21:09.934921 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2025-08-29 18:21:09.934933 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2025-08-29 18:21:09.934945 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s
2025-08-29 18:21:09.934957 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s
2025-08-29 18:21:09.934970 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s
2025-08-29 18:21:09.934982 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s
2025-08-29 18:21:09.934995 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2025-08-29 18:21:09.935007 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s
2025-08-29 18:21:09.935019 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2025-08-29 18:21:10.236765 | orchestrator | + osism validate ceph-mgrs
2025-08-29 18:21:41.228166 | orchestrator |
2025-08-29 18:21:41.228293 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-08-29 18:21:41.228305 | orchestrator |
2025-08-29 18:21:41.228312 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-08-29 18:21:41.228319 | orchestrator | Friday 29 August 2025 18:21:26 +0000 (0:00:00.446) 0:00:00.446 *********
2025-08-29 18:21:41.228327 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.228333 | orchestrator |
2025-08-29 18:21:41.228340 | orchestrator | TASK [Create report output directory] ******************************************
2025-08-29 18:21:41.228346 | orchestrator | Friday 29 August 2025 18:21:27 +0000 (0:00:00.647) 0:00:01.093 *********
2025-08-29 18:21:41.228352 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.228358 | orchestrator |
2025-08-29 18:21:41.228364 | orchestrator | TASK [Define report vars] ******************************************************
2025-08-29 18:21:41.228370 | orchestrator | Friday 29 August 2025 18:21:28 +0000 (0:00:00.860) 0:00:01.954 *********
2025-08-29 18:21:41.228377 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228384 | orchestrator |
2025-08-29 18:21:41.228390 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-08-29 18:21:41.228396 | orchestrator | Friday 29 August 2025 18:21:28 +0000 (0:00:00.255) 0:00:02.210 *********
2025-08-29 18:21:41.228402 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228408 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:41.228414 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:41.228420 | orchestrator |
2025-08-29 18:21:41.228426 | orchestrator | TASK [Get container info] ******************************************************
2025-08-29 18:21:41.228432 | orchestrator | Friday 29 August 2025 18:21:28 +0000 (0:00:00.298) 0:00:02.509 *********
2025-08-29 18:21:41.228439 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228445 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:41.228451 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:41.228457 | orchestrator |
2025-08-29 18:21:41.228477 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-08-29 18:21:41.228502 | orchestrator | Friday 29 August 2025 18:21:29 +0000 (0:00:00.975) 0:00:03.484 *********
2025-08-29 18:21:41.228509 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228515 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:21:41.228521 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:21:41.228528 | orchestrator |
2025-08-29 18:21:41.228534 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-08-29 18:21:41.228540 | orchestrator | Friday 29 August 2025 18:21:29 +0000 (0:00:00.324) 0:00:03.809 *********
2025-08-29 18:21:41.228546 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228553 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:41.228559 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:41.228565 | orchestrator |
2025-08-29 18:21:41.228571 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 18:21:41.228577 | orchestrator | Friday 29 August 2025 18:21:30 +0000 (0:00:00.514) 0:00:04.323 *********
2025-08-29 18:21:41.228583 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228589 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:41.228596 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:41.228602 | orchestrator |
2025-08-29 18:21:41.228608 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-08-29 18:21:41.228614 | orchestrator | Friday 29 August 2025 18:21:30 +0000 (0:00:00.319) 0:00:04.643 *********
2025-08-29 18:21:41.228620 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228626 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:21:41.228633 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:21:41.228639 | orchestrator |
2025-08-29 18:21:41.228646 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-08-29 18:21:41.228653 | orchestrator | Friday 29 August 2025 18:21:31 +0000 (0:00:00.290) 0:00:04.933 *********
2025-08-29 18:21:41.228660 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228667 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:21:41.228674 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:21:41.228681 | orchestrator |
2025-08-29 18:21:41.228688 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 18:21:41.228695 | orchestrator | Friday 29 August 2025 18:21:31 +0000 (0:00:00.317) 0:00:05.250 *********
2025-08-29 18:21:41.228702 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228709 | orchestrator |
2025-08-29 18:21:41.228716 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 18:21:41.228722 | orchestrator | Friday 29 August 2025 18:21:31 +0000 (0:00:00.656) 0:00:05.907 *********
2025-08-29 18:21:41.228730 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228736 | orchestrator |
2025-08-29 18:21:41.228743 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 18:21:41.228750 | orchestrator | Friday 29 August 2025 18:21:32 +0000 (0:00:00.251) 0:00:06.158 *********
2025-08-29 18:21:41.228757 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228763 | orchestrator |
2025-08-29 18:21:41.228770 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:41.228777 | orchestrator | Friday 29 August 2025 18:21:32 +0000 (0:00:00.232) 0:00:06.391 *********
2025-08-29 18:21:41.228784 | orchestrator |
2025-08-29 18:21:41.228791 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:41.228798 | orchestrator | Friday 29 August 2025 18:21:32 +0000 (0:00:00.067) 0:00:06.459 *********
2025-08-29 18:21:41.228805 | orchestrator |
2025-08-29 18:21:41.228812 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:41.228819 | orchestrator | Friday 29 August 2025 18:21:32 +0000 (0:00:00.069) 0:00:06.529 *********
2025-08-29 18:21:41.228825 | orchestrator |
2025-08-29 18:21:41.228832 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 18:21:41.228839 | orchestrator | Friday 29 August 2025 18:21:32 +0000 (0:00:00.072) 0:00:06.601 *********
2025-08-29 18:21:41.228850 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228857 | orchestrator |
2025-08-29 18:21:41.228864 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-08-29 18:21:41.228871 | orchestrator | Friday 29 August 2025 18:21:32 +0000 (0:00:00.257) 0:00:06.859 *********
2025-08-29 18:21:41.228878 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.228885 | orchestrator |
2025-08-29 18:21:41.228904 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-08-29 18:21:41.228912 | orchestrator | Friday 29 August 2025 18:21:33 +0000 (0:00:00.252) 0:00:07.111 *********
2025-08-29 18:21:41.228919 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228926 | orchestrator |
2025-08-29 18:21:41.228933 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-08-29 18:21:41.228940 | orchestrator | Friday 29 August 2025 18:21:33 +0000 (0:00:00.114) 0:00:07.226 *********
2025-08-29 18:21:41.228948 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:21:41.228954 | orchestrator |
2025-08-29 18:21:41.228961 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-08-29 18:21:41.228968 | orchestrator | Friday 29 August 2025 18:21:35 +0000 (0:00:01.921) 0:00:09.147 *********
2025-08-29 18:21:41.228975 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.228982 | orchestrator |
2025-08-29 18:21:41.228989 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-08-29 18:21:41.228996 | orchestrator | Friday 29 August 2025 18:21:35 +0000 (0:00:00.260) 0:00:09.408 *********
2025-08-29 18:21:41.229003 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.229009 | orchestrator |
2025-08-29 18:21:41.229015 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-08-29 18:21:41.229021 | orchestrator | Friday 29 August 2025 18:21:36 +0000 (0:00:00.763) 0:00:10.172 *********
2025-08-29 18:21:41.229027 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.229033 | orchestrator |
2025-08-29 18:21:41.229039 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-08-29 18:21:41.229045 | orchestrator | Friday 29 August 2025 18:21:36 +0000 (0:00:00.137) 0:00:10.309 *********
2025-08-29 18:21:41.229051 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:21:41.229057 | orchestrator |
2025-08-29 18:21:41.229063 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-08-29 18:21:41.229070 | orchestrator | Friday 29 August 2025 18:21:36 +0000 (0:00:00.158) 0:00:10.468 *********
2025-08-29 18:21:41.229076 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.229082 | orchestrator |
2025-08-29 18:21:41.229088 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-08-29 18:21:41.229094 | orchestrator | Friday 29 August 2025 18:21:36 +0000 (0:00:00.251) 0:00:10.719 *********
2025-08-29 18:21:41.229100 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:21:41.229106 | orchestrator |
2025-08-29 18:21:41.229113 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 18:21:41.229119 | orchestrator | Friday 29 August 2025 18:21:37 +0000 (0:00:00.285) 0:00:11.004 *********
2025-08-29 18:21:41.229125 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.229131 | orchestrator |
2025-08-29 18:21:41.229137 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 18:21:41.229143 | orchestrator | Friday 29 August 2025 18:21:38 +0000 (0:00:01.247) 0:00:12.251 *********
2025-08-29 18:21:41.229149 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.229155 | orchestrator |
2025-08-29 18:21:41.229161 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 18:21:41.229167 | orchestrator | Friday 29 August 2025 18:21:38 +0000 (0:00:00.263) 0:00:12.515 *********
2025-08-29 18:21:41.229173 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.229179 | orchestrator |
2025-08-29 18:21:41.229185 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:41.229196 | orchestrator | Friday 29 August 2025 18:21:38 +0000 (0:00:00.254) 0:00:12.769 *********
2025-08-29 18:21:41.229202 | orchestrator |
2025-08-29 18:21:41.229208 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:41.229214 | orchestrator | Friday 29 August 2025 18:21:38 +0000 (0:00:00.093) 0:00:12.862 *********
2025-08-29 18:21:41.229233 | orchestrator |
2025-08-29 18:21:41.229240 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:21:41.229246 | orchestrator | Friday 29 August 2025 18:21:39 +0000 (0:00:00.068) 0:00:12.931 *********
2025-08-29 18:21:41.229252 | orchestrator |
2025-08-29 18:21:41.229258 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-08-29 18:21:41.229264 | orchestrator | Friday 29 August 2025 18:21:39 +0000 (0:00:00.071) 0:00:13.002 *********
2025-08-29 18:21:41.229270 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-08-29 18:21:41.229276 | orchestrator |
2025-08-29 18:21:41.229282 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 18:21:41.229288 | orchestrator | Friday 29 August 2025 18:21:40 +0000 (0:00:01.705) 0:00:14.708 *********
2025-08-29 18:21:41.229294 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-08-29 18:21:41.229301 | orchestrator |     "msg": [
2025-08-29 18:21:41.229307 | orchestrator |         "Validator run completed.",
2025-08-29 18:21:41.229313 | orchestrator |         "You can find the report file here:",
2025-08-29 18:21:41.229319 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-08-29T18:21:27+00:00-report.json",
2025-08-29 18:21:41.229326 | orchestrator |         "on the following host:",
2025-08-29 18:21:41.229332 | orchestrator |         "testbed-manager"
2025-08-29 18:21:41.229339 | orchestrator |     ]
2025-08-29 18:21:41.229345 | orchestrator | }
2025-08-29 18:21:41.229351 | orchestrator |
2025-08-29 18:21:41.229357 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:21:41.229364 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 18:21:41.229371 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 18:21:41.229382 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 18:21:41.547835 | orchestrator |
2025-08-29 18:21:41.547922 | orchestrator |
2025-08-29 18:21:41.547933 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:21:41.547944 | orchestrator | Friday 29 August 2025 18:21:41 +0000 (0:00:00.430) 0:00:15.138 *********
2025-08-29 18:21:41.547954 | orchestrator | ===============================================================================
2025-08-29 18:21:41.547963 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.92s
2025-08-29 18:21:41.547972 | orchestrator | Write report file ------------------------------------------------------- 1.71s
2025-08-29 18:21:41.547980 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s
2025-08-29 18:21:41.547989 | orchestrator | Get container info ------------------------------------------------------ 0.98s
2025-08-29 18:21:41.547998 | orchestrator | Create report output directory ------------------------------------------ 0.86s
2025-08-29 18:21:41.548006 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.76s
2025-08-29 18:21:41.548015 | orchestrator | Aggregate test results step one ----------------------------------------- 0.66s
2025-08-29 18:21:41.548023 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-08-29 18:21:41.548032 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2025-08-29 18:21:41.548040 | orchestrator | Print report file information ------------------------------------------- 0.43s
2025-08-29 18:21:41.548048 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2025-08-29 18:21:41.548077 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-08-29 18:21:41.548105 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s
2025-08-29 18:21:41.548114 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-08-29 18:21:41.548123 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2025-08-29 18:21:41.548132 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s
2025-08-29 18:21:41.548140 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-08-29 18:21:41.548149 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s
2025-08-29 18:21:41.548157 | orchestrator | Print report file information ------------------------------------------- 0.26s
2025-08-29 18:21:41.548166 | orchestrator | Define report vars ------------------------------------------------------ 0.26s
2025-08-29 18:21:41.847311 | orchestrator | + osism validate ceph-osds
2025-08-29 18:22:02.449957 | orchestrator |
2025-08-29 18:22:02.450121 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-08-29 18:22:02.450140 | orchestrator |
2025-08-29 18:22:02.450152 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-08-29 18:22:02.450163 | orchestrator | Friday 29 August 2025 18:21:58 +0000 (0:00:00.417) 0:00:00.418 *********
2025-08-29 18:22:02.450174 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:02.450186 | orchestrator |
2025-08-29 18:22:02.450197 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 18:22:02.450208 | orchestrator | Friday 29 August 2025 18:21:58 +0000 (0:00:00.637) 0:00:01.055 *********
2025-08-29 18:22:02.450218 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:02.450229 | orchestrator |
2025-08-29 18:22:02.450240 | orchestrator | TASK [Create report output directory] ******************************************
2025-08-29 18:22:02.450308 | orchestrator | Friday 29 August 2025 18:21:59 +0000 (0:00:00.227) 0:00:01.282 *********
2025-08-29 18:22:02.450320 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:02.450331 | orchestrator |
2025-08-29 18:22:02.450342 | orchestrator | TASK [Define report vars] ******************************************************
2025-08-29 18:22:02.450353 | orchestrator | Friday 29 August 2025 18:22:00 +0000 (0:00:01.005) 0:00:02.288 *********
2025-08-29 18:22:02.450365 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:02.450377 | orchestrator |
2025-08-29 18:22:02.450388 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-08-29 18:22:02.450398 | orchestrator | Friday 29 August 2025 18:22:00 +0000 (0:00:00.122) 0:00:02.411 *********
2025-08-29 18:22:02.450409 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:02.450420 | orchestrator |
2025-08-29 18:22:02.450431 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-08-29 18:22:02.450441 | orchestrator | Friday 29 August 2025 18:22:00 +0000 (0:00:00.143) 0:00:02.554 *********
2025-08-29 18:22:02.450452 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:02.450463 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:22:02.450474 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:22:02.450485 | orchestrator |
2025-08-29 18:22:02.450497 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-08-29 18:22:02.450510 | orchestrator | Friday 29 August 2025 18:22:00 +0000 (0:00:00.318) 0:00:02.873 *********
2025-08-29 18:22:02.450522 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:02.450534 | orchestrator |
2025-08-29 18:22:02.450547 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-08-29 18:22:02.450559 | orchestrator | Friday 29 August 2025 18:22:00 +0000 (0:00:00.151) 0:00:03.024 *********
2025-08-29 18:22:02.450571 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:02.450583 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:02.450595 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:02.450631 | orchestrator |
2025-08-29 18:22:02.450644 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-08-29 18:22:02.450656 | orchestrator | Friday 29 August 2025 18:22:01 +0000 (0:00:00.314) 0:00:03.339 *********
2025-08-29 18:22:02.450669 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:02.450681 | orchestrator |
2025-08-29 18:22:02.450693 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 18:22:02.450706 | orchestrator | Friday 29 August 2025 18:22:01 +0000 (0:00:00.529) 0:00:03.868 *********
2025-08-29 18:22:02.450718 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:02.450731 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:02.450743 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:02.450755 | orchestrator |
2025-08-29 18:22:02.450767 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-08-29 18:22:02.450780 | orchestrator | Friday 29 August 2025 18:22:02 +0000 (0:00:00.462) 0:00:04.331 *********
2025-08-29 18:22:02.450796 | orchestrator | skipping: [testbed-node-3] => (item={'id': '36c98d7db82fe66bdfef19922106bc9b4a6df82d402e585cb489e9d5084a81b8', 'image':
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-08-29 18:22:02.450813 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e2bda89391795cdf141014bc48dca1570bc11880dd5c38908964ea41779bb78b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 18:22:02.450826 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6d4fcb02a0cc0fd1589776f305ee919ef3ef7985698b9f6afa0f201f258f7d52', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 18:22:02.450857 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4fdc1f480a938091004d93daf59b64db3292aa7804ad34f14b3e76ca660ca903', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 18:22:02.450872 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3209842367c2b0d31d2adce53a08af7f81c0b937c8ebf1a6497ef19e5b993329', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 18:22:02.450902 | orchestrator | skipping: [testbed-node-3] => (item={'id': '064e312b87d45b19deae2b8cadb5bd0c1a766b498b39840454abbd80a506f7d8', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 18:22:02.450914 | orchestrator | skipping: [testbed-node-3] => (item={'id': '25fe2d7c65507d172007f22e901bb96318872ca00820f19ad70719ec1df44962', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': 
'/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 18:22:02.450982 | orchestrator | skipping: [testbed-node-3] => (item={'id': '565bcc6b94ca2d489155e75b3a8bda6babbd41ad5a968cea0f6f02d15096f8bd', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 18:22:02.450995 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9bdf3b970b3df44cf23bb6cbe495517e9f619fe051edb9aa6fb2e03cc59f004e', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-08-29 18:22:02.451006 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7f1dfe348f33d3350cf16d9d52caebef3714b8a3931930743f4e3b368dde967f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-08-29 18:22:02.451026 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4b03c0f711c52bf394d77fe23ee416aa8ec3d94b4a877231bae50daaa32660a2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 18:22:02.451038 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ce94d8336c7a63369220519e0d8d0be24ef18b874cec4292a08ceefaeffbb298', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 18:22:02.451052 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f7c97941124ded1b83a745fc3abb7a926ef59759deba55c57f608a01a815456f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 18:22:02.451063 | orchestrator | ok: [testbed-node-3] => (item={'id': 
'1360eb1640bc134a3f296e48aa7bc081150d45719a2f27f18df380cab990f001', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 18:22:02.451074 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8912daf52d53906af4c23817c91382a99d114fbead9986e0dd278894d13d839c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-08-29 18:22:02.451086 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f90160d0a63740fa05cf444bb5558682f7425aa6530619e7d352fe29dc7afb3', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 18:22:02.451098 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ef9b47a8b0319a0b25adf3ae8cfc56262910ba18d45e0a52b75d3b3442d4d01f', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 18:22:02.451109 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bb4be0f9f70f85aad860a98a747325c13d647c98339e91a04ec29fd6e98f3ca4', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 18:22:02.451125 | orchestrator | skipping: [testbed-node-3] => (item={'id': '86ff946d3245b74dc1ff06c97116d8605cc624099f9fb5374a7035cf809c46d9', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 18:22:02.451137 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e4071b671e670fb1448a6c4ce8a3f49df10d34eb9e8c466fb237d03703ad3ffe', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 
'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 18:22:02.451156 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f1f29c74b34cb6d4c1e917e1c1f481ca64b7b38fce2328e05862853d17007ca', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-08-29 18:22:02.645164 | orchestrator | skipping: [testbed-node-4] => (item={'id': '19539f67378aa0f284d300ebf2b4ce34dda117513747c15271957a2d91dedd8a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 18:22:02.645311 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c91875ccd07eb1a49d1b8f9f53e720771d9054d1f9aa3a42a79a31f732648b36', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 18:22:02.645328 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5cc5f34183bfeaf63772c28745ddef21c12275e6348e10d435808bafd85f482d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 18:22:02.645365 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6a794fbd6f49f85326716f057b05d00807431a0b4caa5d170f2218607fd0f8cc', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 18:22:02.645377 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7252a72a749122c36af180227b2be92f317527350ad16a8389fa79aac453f819', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 18:22:02.645388 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': '2cd73d7486555efa9bbeb3fb13a11ddbfa9af6a80c60669376756bb388adf936', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 18:22:02.645401 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9655ec0bbaf3d1fd951b454938aa4dcb170de5afcbb54bfad181b98438293d97', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 18:22:02.645412 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0329d86ef718f3337b1febe35261085eb7d0664ddf00f1e5091b96bb93f0a197', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-08-29 18:22:02.645423 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ff13f1aba2311a13a74106ba7a150ac0f4c5ee4966fe4b8c865f5fa49ea994dd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-08-29 18:22:02.645435 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7f2960fe6a4dbbad794b9c1e679519bde92ccd620802e341133879fab862d3c7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 18:22:02.645447 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b20542287768bddb388619b9c6ff03a02b1886f680dbb03c23bd64664c824963', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 18:22:02.645459 | orchestrator | ok: [testbed-node-4] => (item={'id': '8aaf479927112ae2e4499e079956c72d084c0fe23fa6652091a5a7bcc9327e0b', 
'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 18:22:02.645472 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ca33742b8316d62219c8f6d376b60ce9f3ad05a6e68f5e696155f2e8ad1e335f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 18:22:02.645483 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5f7c92b411d379c17a10aafd08191eee7651f5310bd77c6afc83513ec530a366', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-08-29 18:22:02.645528 | orchestrator | skipping: [testbed-node-4] => (item={'id': '257cc3492f5a2a74b5dd5bd409449f8e99955e39a51dc54276e2fe6733cbb6f4', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 18:22:02.645542 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9d8ad0ebad89a3b5484d8c7e0a2c3917b23f95be49cab807a6d98cd04ed011f0', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 18:22:02.645561 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'edc0eaa06166e8ea414b6b932183df505a2a1783c3e1e67deea388b2f6b24612', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 18:22:02.645572 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7637a59191dac2ee9f6d00ffd1de13e58c4f5f6492ff1050d0cd0725f709057e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 18:22:02.645584 | orchestrator 
| skipping: [testbed-node-4] => (item={'id': '74d29aaefbd3ee324aea4bd54986058cc631a98bc7aaadcdf01ae4ada5abb2bd', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 18:22:02.645595 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3cac11289fe30675655223741ca4819308ca40e6ef7152971137da4a4a4fa3bd', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-08-29 18:22:02.645606 | orchestrator | skipping: [testbed-node-5] => (item={'id': '14399fa9bb6f5578bb514a13dc468f984414ac2bae54885b098d44fefd516256', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 18:22:02.645617 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea030250190491dae57d9367e2cb13488a12cb6972a44b7b63481238e2ea2e32', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-08-29 18:22:02.645628 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'be2d391c30dc710938462085bd3557a605017908b020c38b220c95f9b513d130', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-08-29 18:22:02.645639 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0b8783658d3319a473366f6a4a2827630221640404db35c6131850db846aaf46', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 18:22:02.645650 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6706ab590415b1f270f72dd264ce16387733bc9dc855fa314dcf7ff57b9c4f0e', 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-08-29 18:22:02.645661 | orchestrator | skipping: [testbed-node-5] => (item={'id': '04d2c630a896fd4da4a8c32a5994c5f04e52a6c0c30177df10d4bad7b7e893f9', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 18:22:02.645677 | orchestrator | skipping: [testbed-node-5] => (item={'id': '64a0712968e3cd56076f8a5ac76dae456419d0b42258da1ef112e6bf8eef3ac2', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-08-29 18:22:02.645689 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f0e017126eab39ca99f6ab5c7a1e4bc2659bbc6ed00d1aa09d22f410b186cbf1', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-08-29 18:22:02.645701 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c9c692a160e3c019152109dcd15ce8ea56c404ea7af1d23a648a3e04003d8550', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-08-29 18:22:02.645728 | orchestrator | skipping: [testbed-node-5] => (item={'id': '683f957cd57a3733ef846d46eab061eda573c8c0e4749f364a28c1c3baa0e1d9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-08-29 18:22:10.277577 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd08e636896326f46a8c373655f9727b005510a16c1b939e80a7bc224c6c23167', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 
'state': 'running', 'status': 'Up 23 minutes'})  2025-08-29 18:22:10.277695 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd76cd0da0046e187603e687cf9e275765c960673d499a1cb8d22e719ab09bcbb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 18:22:10.277713 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f24d9a051cdcad37ba4099209aca328fd076e6cb32564f632bab145734be5815', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-08-29 18:22:10.277725 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3b3ebf4fe1ac76cae8a5ca696e0126017ffafdb1ceeec4e8916987b6df2cfcf5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-08-29 18:22:10.277738 | orchestrator | skipping: [testbed-node-5] => (item={'id': '691bb26492f7bb6ad9eddf29f5bd77ad041f1a20e07808e3e274be671449c0ec', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 18:22:10.277752 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0ede636f399ee7ffbcc0bba2b397e39237bf8e106f8d0445d1908e656d2d5f24', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-08-29 18:22:10.277762 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a904570b78e1e869dfb76fc36a48109bdccc5481370930ce7af89200b2695465', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-08-29 18:22:10.277773 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'f71d4a34ecf263937dbf1d97b803dae46ccbeb237c501e00f2550de4f41d6153', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 18:22:10.277784 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aea1e916a997f2b839ed555ac1e5dce1ed68dea161f22398ee5eb8367e0c3211', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-08-29 18:22:10.277795 | orchestrator | 2025-08-29 18:22:10.277809 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-08-29 18:22:10.277820 | orchestrator | Friday 29 August 2025 18:22:02 +0000 (0:00:00.538) 0:00:04.869 ********* 2025-08-29 18:22:10.277831 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.277843 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:22:10.277853 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:22:10.277864 | orchestrator | 2025-08-29 18:22:10.277875 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-08-29 18:22:10.277885 | orchestrator | Friday 29 August 2025 18:22:03 +0000 (0:00:00.302) 0:00:05.171 ********* 2025-08-29 18:22:10.277896 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.277907 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:22:10.277918 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:22:10.277928 | orchestrator | 2025-08-29 18:22:10.277939 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-08-29 18:22:10.277949 | orchestrator | Friday 29 August 2025 18:22:03 +0000 (0:00:00.290) 0:00:05.461 ********* 2025-08-29 18:22:10.277983 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.277994 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:22:10.278077 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:22:10.278091 
| orchestrator | 2025-08-29 18:22:10.278103 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 18:22:10.278116 | orchestrator | Friday 29 August 2025 18:22:03 +0000 (0:00:00.489) 0:00:05.951 ********* 2025-08-29 18:22:10.278128 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.278140 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:22:10.278152 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:22:10.278164 | orchestrator | 2025-08-29 18:22:10.278176 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-08-29 18:22:10.278188 | orchestrator | Friday 29 August 2025 18:22:04 +0000 (0:00:00.312) 0:00:06.263 ********* 2025-08-29 18:22:10.278200 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-08-29 18:22:10.278214 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-08-29 18:22:10.278227 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-08-29 18:22:10.278276 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-08-29 18:22:10.278308 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:22:10.278321 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-08-29 18:22:10.278333 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-08-29 18:22:10.278345 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:22:10.278357 | orchestrator | 2025-08-29 18:22:10.278369 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-08-29 18:22:10.278382 | 
orchestrator | Friday 29 August 2025 18:22:04 +0000 (0:00:00.319) 0:00:06.583 ********* 2025-08-29 18:22:10.278394 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.278406 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:22:10.278419 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:22:10.278431 | orchestrator | 2025-08-29 18:22:10.278443 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 18:22:10.278455 | orchestrator | Friday 29 August 2025 18:22:04 +0000 (0:00:00.296) 0:00:06.879 ********* 2025-08-29 18:22:10.278465 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278476 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:22:10.278487 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:22:10.278497 | orchestrator | 2025-08-29 18:22:10.278508 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-08-29 18:22:10.278518 | orchestrator | Friday 29 August 2025 18:22:05 +0000 (0:00:00.503) 0:00:07.383 ********* 2025-08-29 18:22:10.278529 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278539 | orchestrator | skipping: [testbed-node-4] 2025-08-29 18:22:10.278550 | orchestrator | skipping: [testbed-node-5] 2025-08-29 18:22:10.278560 | orchestrator | 2025-08-29 18:22:10.278571 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-08-29 18:22:10.278582 | orchestrator | Friday 29 August 2025 18:22:05 +0000 (0:00:00.364) 0:00:07.747 ********* 2025-08-29 18:22:10.278593 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.278603 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:22:10.278614 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:22:10.278624 | orchestrator | 2025-08-29 18:22:10.278635 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-08-29 18:22:10.278646 | orchestrator | Friday 29 August 2025 
18:22:05 +0000 (0:00:00.347) 0:00:08.095 ********* 2025-08-29 18:22:10.278656 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278667 | orchestrator | 2025-08-29 18:22:10.278678 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-08-29 18:22:10.278697 | orchestrator | Friday 29 August 2025 18:22:06 +0000 (0:00:00.262) 0:00:08.357 ********* 2025-08-29 18:22:10.278707 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278718 | orchestrator | 2025-08-29 18:22:10.278729 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-08-29 18:22:10.278739 | orchestrator | Friday 29 August 2025 18:22:06 +0000 (0:00:00.248) 0:00:08.606 ********* 2025-08-29 18:22:10.278750 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278761 | orchestrator | 2025-08-29 18:22:10.278772 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 18:22:10.278783 | orchestrator | Friday 29 August 2025 18:22:06 +0000 (0:00:00.254) 0:00:08.861 ********* 2025-08-29 18:22:10.278794 | orchestrator | 2025-08-29 18:22:10.278804 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 18:22:10.278815 | orchestrator | Friday 29 August 2025 18:22:06 +0000 (0:00:00.065) 0:00:08.927 ********* 2025-08-29 18:22:10.278826 | orchestrator | 2025-08-29 18:22:10.278836 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-08-29 18:22:10.278847 | orchestrator | Friday 29 August 2025 18:22:06 +0000 (0:00:00.061) 0:00:08.989 ********* 2025-08-29 18:22:10.278857 | orchestrator | 2025-08-29 18:22:10.278868 | orchestrator | TASK [Print report file information] ******************************************* 2025-08-29 18:22:10.278879 | orchestrator | Friday 29 August 2025 18:22:07 +0000 (0:00:00.224) 0:00:09.213 ********* 2025-08-29 
18:22:10.278890 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278900 | orchestrator | 2025-08-29 18:22:10.278911 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-08-29 18:22:10.278921 | orchestrator | Friday 29 August 2025 18:22:07 +0000 (0:00:00.246) 0:00:09.459 ********* 2025-08-29 18:22:10.278932 | orchestrator | skipping: [testbed-node-3] 2025-08-29 18:22:10.278942 | orchestrator | 2025-08-29 18:22:10.278953 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-08-29 18:22:10.278964 | orchestrator | Friday 29 August 2025 18:22:07 +0000 (0:00:00.268) 0:00:09.728 ********* 2025-08-29 18:22:10.278974 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.278985 | orchestrator | ok: [testbed-node-4] 2025-08-29 18:22:10.278995 | orchestrator | ok: [testbed-node-5] 2025-08-29 18:22:10.279006 | orchestrator | 2025-08-29 18:22:10.279017 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-08-29 18:22:10.279028 | orchestrator | Friday 29 August 2025 18:22:07 +0000 (0:00:00.293) 0:00:10.021 ********* 2025-08-29 18:22:10.279039 | orchestrator | ok: [testbed-node-3] 2025-08-29 18:22:10.279049 | orchestrator | 2025-08-29 18:22:10.279060 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-08-29 18:22:10.279071 | orchestrator | Friday 29 August 2025 18:22:08 +0000 (0:00:00.246) 0:00:10.268 ********* 2025-08-29 18:22:10.279082 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 18:22:10.279092 | orchestrator | 2025-08-29 18:22:10.279103 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-08-29 18:22:10.279114 | orchestrator | Friday 29 August 2025 18:22:09 +0000 (0:00:01.611) 0:00:11.880 ********* 2025-08-29 18:22:10.279125 | orchestrator | ok: [testbed-node-3] 2025-08-29 
18:22:10.279136 | orchestrator |
2025-08-29 18:22:10.279146 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-08-29 18:22:10.279157 | orchestrator | Friday 29 August 2025 18:22:09 +0000 (0:00:00.134) 0:00:12.015 *********
2025-08-29 18:22:10.279167 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:10.279178 | orchestrator |
2025-08-29 18:22:10.279189 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-08-29 18:22:10.279200 | orchestrator | Friday 29 August 2025 18:22:10 +0000 (0:00:00.301) 0:00:12.316 *********
2025-08-29 18:22:10.279216 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:22.964023 | orchestrator |
2025-08-29 18:22:22.964143 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-08-29 18:22:22.964160 | orchestrator | Friday 29 August 2025 18:22:10 +0000 (0:00:00.112) 0:00:12.429 *********
2025-08-29 18:22:22.964196 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964209 | orchestrator |
2025-08-29 18:22:22.964220 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 18:22:22.964231 | orchestrator | Friday 29 August 2025 18:22:10 +0000 (0:00:00.125) 0:00:12.554 *********
2025-08-29 18:22:22.964242 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964253 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.964320 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.964333 | orchestrator |
2025-08-29 18:22:22.964345 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-08-29 18:22:22.964356 | orchestrator | Friday 29 August 2025 18:22:10 +0000 (0:00:00.501) 0:00:13.055 *********
2025-08-29 18:22:22.964367 | orchestrator | changed: [testbed-node-3]
2025-08-29 18:22:22.964379 | orchestrator | changed: [testbed-node-4]
2025-08-29 18:22:22.964390 | orchestrator | changed: [testbed-node-5]
2025-08-29 18:22:22.964400 | orchestrator |
2025-08-29 18:22:22.964411 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-08-29 18:22:22.964422 | orchestrator | Friday 29 August 2025 18:22:13 +0000 (0:00:02.194) 0:00:15.249 *********
2025-08-29 18:22:22.964433 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964444 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.964455 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.964465 | orchestrator |
2025-08-29 18:22:22.964476 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-08-29 18:22:22.964486 | orchestrator | Friday 29 August 2025 18:22:13 +0000 (0:00:00.309) 0:00:15.559 *********
2025-08-29 18:22:22.964497 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964508 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.964518 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.964529 | orchestrator |
2025-08-29 18:22:22.964540 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-08-29 18:22:22.964551 | orchestrator | Friday 29 August 2025 18:22:13 +0000 (0:00:00.482) 0:00:16.041 *********
2025-08-29 18:22:22.964563 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:22.964576 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:22:22.964589 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:22:22.964601 | orchestrator |
2025-08-29 18:22:22.964613 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-08-29 18:22:22.964625 | orchestrator | Friday 29 August 2025 18:22:14 +0000 (0:00:00.482) 0:00:16.524 *********
2025-08-29 18:22:22.964638 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964650 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.964662 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.964675 | orchestrator |
2025-08-29 18:22:22.964688 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-08-29 18:22:22.964700 | orchestrator | Friday 29 August 2025 18:22:14 +0000 (0:00:00.306) 0:00:16.830 *********
2025-08-29 18:22:22.964712 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:22.964725 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:22:22.964736 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:22:22.964748 | orchestrator |
2025-08-29 18:22:22.964761 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-08-29 18:22:22.964822 | orchestrator | Friday 29 August 2025 18:22:14 +0000 (0:00:00.290) 0:00:17.121 *********
2025-08-29 18:22:22.964836 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:22.964848 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:22:22.964860 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:22:22.964872 | orchestrator |
2025-08-29 18:22:22.964884 | orchestrator | TASK [Prepare test data] *******************************************************
2025-08-29 18:22:22.964897 | orchestrator | Friday 29 August 2025 18:22:15 +0000 (0:00:00.318) 0:00:17.440 *********
2025-08-29 18:22:22.964909 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964920 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.964931 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.964950 | orchestrator |
2025-08-29 18:22:22.964961 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-08-29 18:22:22.964972 | orchestrator | Friday 29 August 2025 18:22:16 +0000 (0:00:00.753) 0:00:18.193 *********
2025-08-29 18:22:22.964982 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.964993 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.965003 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.965014 | orchestrator |
2025-08-29 18:22:22.965024 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-08-29 18:22:22.965035 | orchestrator | Friday 29 August 2025 18:22:16 +0000 (0:00:00.497) 0:00:18.691 *********
2025-08-29 18:22:22.965045 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.965056 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.965066 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.965076 | orchestrator |
2025-08-29 18:22:22.965092 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-08-29 18:22:22.965103 | orchestrator | Friday 29 August 2025 18:22:16 +0000 (0:00:00.314) 0:00:19.006 *********
2025-08-29 18:22:22.965114 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:22.965124 | orchestrator | skipping: [testbed-node-4]
2025-08-29 18:22:22.965135 | orchestrator | skipping: [testbed-node-5]
2025-08-29 18:22:22.965145 | orchestrator |
2025-08-29 18:22:22.965156 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-08-29 18:22:22.965167 | orchestrator | Friday 29 August 2025 18:22:17 +0000 (0:00:00.312) 0:00:19.319 *********
2025-08-29 18:22:22.965177 | orchestrator | ok: [testbed-node-3]
2025-08-29 18:22:22.965188 | orchestrator | ok: [testbed-node-4]
2025-08-29 18:22:22.965199 | orchestrator | ok: [testbed-node-5]
2025-08-29 18:22:22.965209 | orchestrator |
2025-08-29 18:22:22.965220 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-08-29 18:22:22.965231 | orchestrator | Friday 29 August 2025 18:22:17 +0000 (0:00:00.511) 0:00:19.831 *********
2025-08-29 18:22:22.965241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:22.965253 | orchestrator |
2025-08-29 18:22:22.965280 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-08-29 18:22:22.965292 | orchestrator | Friday 29 August 2025 18:22:17 +0000 (0:00:00.265) 0:00:20.096 *********
2025-08-29 18:22:22.965303 | orchestrator | skipping: [testbed-node-3]
2025-08-29 18:22:22.965314 | orchestrator |
2025-08-29 18:22:22.965342 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-08-29 18:22:22.965354 | orchestrator | Friday 29 August 2025 18:22:18 +0000 (0:00:00.241) 0:00:20.338 *********
2025-08-29 18:22:22.965365 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:22.965375 | orchestrator |
2025-08-29 18:22:22.965386 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-08-29 18:22:22.965396 | orchestrator | Friday 29 August 2025 18:22:19 +0000 (0:00:01.621) 0:00:21.960 *********
2025-08-29 18:22:22.965407 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:22.965418 | orchestrator |
2025-08-29 18:22:22.965428 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-08-29 18:22:22.965439 | orchestrator | Friday 29 August 2025 18:22:20 +0000 (0:00:00.267) 0:00:22.228 *********
2025-08-29 18:22:22.965449 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:22.965460 | orchestrator |
2025-08-29 18:22:22.965470 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:22:22.965481 | orchestrator | Friday 29 August 2025 18:22:20 +0000 (0:00:00.072) 0:00:22.497 *********
2025-08-29 18:22:22.965491 | orchestrator |
2025-08-29 18:22:22.965502 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:22:22.965513 | orchestrator | Friday 29 August 2025 18:22:20 +0000 (0:00:00.069) 0:00:22.569 *********
2025-08-29 18:22:22.965523 | orchestrator |
2025-08-29 18:22:22.965534 | orchestrator | TASK [Flush handlers] **********************************************************
2025-08-29 18:22:22.965553 | orchestrator | Friday 29 August 2025 18:22:20 +0000 (0:00:00.069) 0:00:22.639 *********
2025-08-29 18:22:22.965563 | orchestrator |
2025-08-29 18:22:22.965574 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-08-29 18:22:22.965585 | orchestrator | Friday 29 August 2025 18:22:20 +0000 (0:00:00.069) 0:00:22.709 *********
2025-08-29 18:22:22.965595 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 18:22:22.965606 | orchestrator |
2025-08-29 18:22:22.965616 | orchestrator | TASK [Print report file information] *******************************************
2025-08-29 18:22:22.965627 | orchestrator | Friday 29 August 2025 18:22:22 +0000 (0:00:01.572) 0:00:24.281 *********
2025-08-29 18:22:22.965637 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-08-29 18:22:22.965648 | orchestrator |  "msg": [
2025-08-29 18:22:22.965659 | orchestrator |  "Validator run completed.",
2025-08-29 18:22:22.965670 | orchestrator |  "You can find the report file here:",
2025-08-29 18:22:22.965681 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-08-29T18:21:58+00:00-report.json",
2025-08-29 18:22:22.965692 | orchestrator |  "on the following host:",
2025-08-29 18:22:22.965703 | orchestrator |  "testbed-manager"
2025-08-29 18:22:22.965714 | orchestrator |  ]
2025-08-29 18:22:22.965725 | orchestrator | }
2025-08-29 18:22:22.965736 | orchestrator |
2025-08-29 18:22:22.965747 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:22:22.965759 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-08-29 18:22:22.965770 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 18:22:22.965781 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 18:22:22.965792 | orchestrator |
2025-08-29 18:22:22.965802 | orchestrator |
2025-08-29 18:22:22.965813 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:22:22.965823 | orchestrator | Friday 29 August 2025 18:22:22 +0000 (0:00:00.804) 0:00:25.086 *********
2025-08-29 18:22:22.965834 | orchestrator | ===============================================================================
2025-08-29 18:22:22.965844 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.19s
2025-08-29 18:22:22.965855 | orchestrator | Aggregate test results step one ----------------------------------------- 1.62s
2025-08-29 18:22:22.965865 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.61s
2025-08-29 18:22:22.965875 | orchestrator | Write report file ------------------------------------------------------- 1.57s
2025-08-29 18:22:22.965891 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2025-08-29 18:22:22.965902 | orchestrator | Print report file information ------------------------------------------- 0.80s
2025-08-29 18:22:22.965913 | orchestrator | Prepare test data ------------------------------------------------------- 0.75s
2025-08-29 18:22:22.965923 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-08-29 18:22:22.965933 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s
2025-08-29 18:22:22.965944 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.53s
2025-08-29 18:22:22.965954 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.51s
2025-08-29 18:22:22.965965 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.50s
2025-08-29 18:22:22.965975 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2025-08-29 18:22:22.965986 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s
2025-08-29 18:22:22.965996 | orchestrator | Set test result to passed if count matches ------------------------------ 0.49s
2025-08-29 18:22:22.966013 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s
2025-08-29 18:22:22.966092 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.48s
2025-08-29 18:22:23.259844 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s
2025-08-29 18:22:23.259936 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.36s
2025-08-29 18:22:23.259949 | orchestrator | Flush handlers ---------------------------------------------------------- 0.35s
2025-08-29 18:22:23.568370 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-08-29 18:22:23.573481 | orchestrator | + set -e
2025-08-29 18:22:23.573513 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 18:22:23.573526 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 18:22:23.573537 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 18:22:23.573635 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 18:22:23.573647 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 18:22:23.573658 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 18:22:23.573670 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 18:22:23.573681 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 18:22:23.573692 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 18:22:23.573703 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 18:22:23.573713 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 18:22:23.573724 | orchestrator | ++ export ARA=false
2025-08-29 18:22:23.573735 | orchestrator | ++ ARA=false
2025-08-29 18:22:23.573746 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 18:22:23.573756 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 18:22:23.573767 | orchestrator | ++ export TEMPEST=false
2025-08-29 18:22:23.573777 | orchestrator | ++ TEMPEST=false
2025-08-29 18:22:23.573788 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 18:22:23.573799 | orchestrator | ++ IS_ZUUL=true
2025-08-29 18:22:23.573810 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2025-08-29 18:22:23.573821 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.57
2025-08-29 18:22:23.573831 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 18:22:23.573842 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 18:22:23.573852 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 18:22:23.573863 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 18:22:23.573873 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 18:22:23.573884 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 18:22:23.573894 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 18:22:23.573905 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 18:22:23.573922 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-08-29 18:22:23.573933 | orchestrator | + source /etc/os-release
2025-08-29 18:22:23.573943 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2025-08-29 18:22:23.573954 | orchestrator | ++ NAME=Ubuntu
2025-08-29 18:22:23.573964 | orchestrator | ++ VERSION_ID=24.04
2025-08-29 18:22:23.573974 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2025-08-29 18:22:23.573985 | orchestrator | ++ VERSION_CODENAME=noble
2025-08-29 18:22:23.573995 | orchestrator | ++ ID=ubuntu
2025-08-29 18:22:23.574006 | orchestrator | ++ ID_LIKE=debian
2025-08-29 18:22:23.574065 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-08-29 18:22:23.574079 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-08-29 18:22:23.574090 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-08-29 18:22:23.574101 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-08-29 18:22:23.574113 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-08-29 18:22:23.574123 | orchestrator | ++ LOGO=ubuntu-logo
2025-08-29 18:22:23.574134 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-08-29 18:22:23.574145 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-08-29 18:22:23.574157 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-08-29 18:22:23.608438 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-08-29 18:22:48.120765 | orchestrator |
2025-08-29 18:22:48.120876 | orchestrator | # Status of Elasticsearch
2025-08-29 18:22:48.120894 | orchestrator |
2025-08-29 18:22:48.120907 | orchestrator | + pushd /opt/configuration/contrib
2025-08-29 18:22:48.120920 | orchestrator | + echo
2025-08-29 18:22:48.120931 | orchestrator | + echo '# Status of Elasticsearch'
2025-08-29 18:22:48.120942 | orchestrator | + echo
2025-08-29 18:22:48.120953 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-08-29 18:22:48.324209 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-08-29 18:22:48.325056 | orchestrator |
2025-08-29 18:22:48.325087 | orchestrator | # Status of MariaDB
2025-08-29 18:22:48.325102 | orchestrator |
2025-08-29 18:22:48.325115 | orchestrator | + echo
2025-08-29 18:22:48.325126 | orchestrator | + echo '# Status of MariaDB'
2025-08-29 18:22:48.325137 | orchestrator | + echo
2025-08-29 18:22:48.325147 | orchestrator | + MARIADB_USER=root_shard_0
2025-08-29 18:22:48.325159 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-08-29 18:22:48.400592 | orchestrator | Reading package lists...
2025-08-29 18:22:48.787896 | orchestrator | Building dependency tree...
2025-08-29 18:22:48.788969 | orchestrator | Reading state information...
2025-08-29 18:22:49.249102 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-08-29 18:22:49.249201 | orchestrator | bc set to manually installed.
2025-08-29 18:22:49.249215 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2025-08-29 18:22:49.931151 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-08-29 18:22:49.931659 | orchestrator |
2025-08-29 18:22:49.931694 | orchestrator | # Status of Prometheus
2025-08-29 18:22:49.931707 | orchestrator |
2025-08-29 18:22:49.931718 | orchestrator | + echo
2025-08-29 18:22:49.931729 | orchestrator | + echo '# Status of Prometheus'
2025-08-29 18:22:49.931740 | orchestrator | + echo
2025-08-29 18:22:49.931751 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-08-29 18:22:49.999907 | orchestrator | Unauthorized
2025-08-29 18:22:50.007134 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-08-29 18:22:50.059984 | orchestrator | Unauthorized
2025-08-29 18:22:50.065054 | orchestrator |
2025-08-29 18:22:50.065080 | orchestrator | # Status of RabbitMQ
2025-08-29 18:22:50.065091 | orchestrator |
2025-08-29 18:22:50.065103 | orchestrator | + echo
2025-08-29 18:22:50.065114 | orchestrator | + echo '# Status of RabbitMQ'
2025-08-29 18:22:50.065125 | orchestrator | + echo
2025-08-29 18:22:50.065137 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-08-29 18:22:50.565397 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-08-29 18:22:50.577868 | orchestrator |
2025-08-29 18:22:50.577910 | orchestrator | # Status of Redis
2025-08-29 18:22:50.577921 | orchestrator |
2025-08-29 18:22:50.577932 | orchestrator | + echo
2025-08-29 18:22:50.577943 | orchestrator | + echo '# Status of Redis'
2025-08-29 18:22:50.577955 | orchestrator | + echo
2025-08-29 18:22:50.577968 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-08-29 18:22:50.586210 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002596s;;;0.000000;10.000000
2025-08-29 18:22:50.586994 | orchestrator |
2025-08-29 18:22:50.587022 | orchestrator | # Create backup of MariaDB database
2025-08-29 18:22:50.587035 | orchestrator |
2025-08-29 18:22:50.587048 | orchestrator | + popd
2025-08-29 18:22:50.587061 | orchestrator | + echo
2025-08-29 18:22:50.587073 | orchestrator | + echo '# Create backup of MariaDB database'
2025-08-29 18:22:50.587086 | orchestrator | + echo
2025-08-29 18:22:50.587099 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-08-29 18:22:52.516936 | orchestrator | 2025-08-29 18:22:52 | INFO  | Task 002a9393-f2ef-45e7-87a3-3f31106babbb (mariadb_backup) was prepared for execution.
2025-08-29 18:22:52.517056 | orchestrator | 2025-08-29 18:22:52 | INFO  | It takes a moment until task 002a9393-f2ef-45e7-87a3-3f31106babbb (mariadb_backup) has been started and output is visible here.
2025-08-29 18:23:21.010974 | orchestrator |
2025-08-29 18:23:21.011098 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 18:23:21.011114 | orchestrator |
2025-08-29 18:23:21.011127 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 18:23:21.011139 | orchestrator | Friday 29 August 2025 18:22:56 +0000 (0:00:00.198) 0:00:00.198 *********
2025-08-29 18:23:21.011176 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:23:21.011189 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:23:21.011199 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:23:21.011210 | orchestrator |
2025-08-29 18:23:21.011221 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 18:23:21.011232 | orchestrator | Friday 29 August 2025 18:22:56 +0000 (0:00:00.353) 0:00:00.551 *********
2025-08-29 18:23:21.011243 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-08-29 18:23:21.011255 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-08-29 18:23:21.011265 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-08-29 18:23:21.011276 | orchestrator |
2025-08-29 18:23:21.011286 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-08-29 18:23:21.011297 | orchestrator |
2025-08-29 18:23:21.011308 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-08-29 18:23:21.011319 | orchestrator | Friday 29 August 2025 18:22:57 +0000 (0:00:00.582) 0:00:01.133 *********
2025-08-29 18:23:21.011383 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 18:23:21.011396 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 18:23:21.011407 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 18:23:21.011418 | orchestrator |
2025-08-29 18:23:21.011429 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 18:23:21.011439 | orchestrator | Friday 29 August 2025 18:22:57 +0000 (0:00:00.421) 0:00:01.555 *********
2025-08-29 18:23:21.011451 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 18:23:21.011464 | orchestrator |
2025-08-29 18:23:21.011475 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-08-29 18:23:21.011485 | orchestrator | Friday 29 August 2025 18:22:58 +0000 (0:00:00.575) 0:00:02.131 *********
2025-08-29 18:23:21.011496 | orchestrator | ok: [testbed-node-2]
2025-08-29 18:23:21.011509 | orchestrator | ok: [testbed-node-1]
2025-08-29 18:23:21.011522 | orchestrator | ok: [testbed-node-0]
2025-08-29 18:23:21.011534 | orchestrator |
2025-08-29 18:23:21.011547 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-08-29 18:23:21.011559 | orchestrator | Friday 29 August 2025 18:23:01 +0000 (0:00:03.213) 0:00:05.344 *********
2025-08-29 18:23:21.011572 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-08-29 18:23:21.011585 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-08-29 18:23:21.011598 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-08-29 18:23:21.011611 | orchestrator | mariadb_bootstrap_restart
2025-08-29 18:23:21.011624 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:23:21.011636 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:23:21.011649 | orchestrator | changed: [testbed-node-0]
2025-08-29 18:23:21.011661 | orchestrator |
2025-08-29 18:23:21.011674 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-08-29 18:23:21.011686 | orchestrator | skipping: no hosts matched
2025-08-29 18:23:21.011699 | orchestrator |
2025-08-29 18:23:21.011711 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 18:23:21.011723 | orchestrator | skipping: no hosts matched
2025-08-29 18:23:21.011736 | orchestrator |
2025-08-29 18:23:21.011748 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-08-29 18:23:21.011760 | orchestrator | skipping: no hosts matched
2025-08-29 18:23:21.011773 | orchestrator |
2025-08-29 18:23:21.011785 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-08-29 18:23:21.011798 | orchestrator |
2025-08-29 18:23:21.011811 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-08-29 18:23:21.011823 | orchestrator | Friday 29 August 2025 18:23:19 +0000 (0:00:18.232) 0:00:23.576 *********
2025-08-29 18:23:21.011836 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:23:21.011856 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:23:21.011868 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:23:21.011878 | orchestrator |
2025-08-29 18:23:21.011889 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-08-29 18:23:21.011899 | orchestrator | Friday 29 August 2025 18:23:20 +0000 (0:00:00.313) 0:00:23.890 *********
2025-08-29 18:23:21.011910 | orchestrator | skipping: [testbed-node-0]
2025-08-29 18:23:21.011921 | orchestrator | skipping: [testbed-node-1]
2025-08-29 18:23:21.011932 | orchestrator | skipping: [testbed-node-2]
2025-08-29 18:23:21.011942 | orchestrator |
2025-08-29 18:23:21.011953 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 18:23:21.011965 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 18:23:21.011976 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 18:23:21.011988 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 18:23:21.011998 | orchestrator |
2025-08-29 18:23:21.012009 | orchestrator |
2025-08-29 18:23:21.012020 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 18:23:21.012031 | orchestrator | Friday 29 August 2025 18:23:20 +0000 (0:00:00.432) 0:00:24.323 *********
2025-08-29 18:23:21.012041 | orchestrator | ===============================================================================
2025-08-29 18:23:21.012052 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.23s
2025-08-29 18:23:21.012079 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.21s
2025-08-29 18:23:21.012091 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-08-29 18:23:21.012102 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s
2025-08-29 18:23:21.012113 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s
2025-08-29 18:23:21.012124 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s
2025-08-29 18:23:21.012134 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-08-29 18:23:21.012145 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s
2025-08-29 18:23:21.309564 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-08-29 18:23:21.317958 | orchestrator | + set -e
2025-08-29 18:23:21.318164 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 18:23:21.318183 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 18:23:21.319068 | orchestrator | ++ INTERACTIVE=false
2025-08-29 18:23:21.319090 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 18:23:21.319102 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 18:23:21.319114 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-08-29 18:23:21.319795 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-08-29 18:23:21.323752 | orchestrator |
2025-08-29 18:23:21.323792 | orchestrator | # OpenStack endpoints
2025-08-29 18:23:21.323805 | orchestrator |
2025-08-29 18:23:21.323816 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-08-29 18:23:21.323827 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-08-29 18:23:21.323837 | orchestrator | + export OS_CLOUD=admin
2025-08-29 18:23:21.323848 | orchestrator | + OS_CLOUD=admin
2025-08-29 18:23:21.323859 | orchestrator | + echo
2025-08-29 18:23:21.323870 | orchestrator | + echo '# OpenStack endpoints'
2025-08-29 18:23:21.323880 | orchestrator | + echo
2025-08-29 18:23:21.323891 | orchestrator | + openstack endpoint list
2025-08-29 18:23:24.696986 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-08-29 18:23:24.697095 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-08-29 18:23:24.697135 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-08-29 18:23:24.697147 | orchestrator | | 0b03393562264382856f3047df08568f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-08-29 18:23:24.697158 | orchestrator | | 15c409998d544c0db1bc80e19b197d44 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-08-29 18:23:24.697186 | orchestrator | | 1d0bd94ae2414317bb58e3b8768e6f27 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-08-29 18:23:24.697198 | orchestrator | | 2ae1f0e710e74bb7981e0e026c5f1ec2 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-08-29 18:23:24.697209 | orchestrator | | 32d1dbefaba44bd7b1db68e139d4c9c2 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-08-29 18:23:24.697220 | orchestrator | | 32fcb6b523e34c8a8bcd9512121b16a6 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-08-29 18:23:24.697231 | orchestrator | | 740ae1e6038a45c0ad8c2402d777f69c | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-08-29 18:23:24.697246 | orchestrator | | 7a4a53267d7d495d84e5b87ec90942a7 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-08-29 18:23:24.697257 | orchestrator | | 7a9d4f1cf98041ec845be00f037936e1 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-08-29 18:23:24.697267 | orchestrator | | 81a51d36f84c475babf389d72f4514a4 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-08-29 18:23:24.697278 | orchestrator | | 86d9685073334c8aa2c39cfaa6dabfd8 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-08-29 18:23:24.697288 | orchestrator | | 942a6d96721a46ca8a3e8aa44743a565 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-08-29 18:23:24.697299 | orchestrator | | a29ec6631db24412b4fa5e0f3a285c90 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-08-29 18:23:24.697310 | orchestrator | | a9bf06183cf0460d90eaab7b788b1b3a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-08-29 18:23:24.697321 | orchestrator | | ad84fd35a5de4a858874a6c4cef0e05e | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-08-29 18:23:24.697387 | orchestrator | | bf8c71d747c7489e8e57c995fa4d634a | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-08-29 18:23:24.697401 | orchestrator | | cb870ab1297b426f80650f2ea4ea2363 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-08-29 18:23:24.697411 | orchestrator | | de300e85de434eef969381b625290f5a | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-08-29 18:23:24.697422 | orchestrator | | f08265795451449ba800a9197d20f4bb | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-08-29 18:23:24.697432 | orchestrator | | f5ad83ea6fa34daf8a4f8f6bb77ae89c | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-08-29 18:23:24.697467 | orchestrator | | f7f414fcda444d199820a9a1b32a448c | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-08-29 18:23:24.697479 | orchestrator | | fd88c055058e4fcb87c62c2f9b7e965e | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-08-29 18:23:24.697489 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-08-29 18:23:24.960041 | orchestrator |
2025-08-29 18:23:24.960127 | orchestrator | # Cinder
2025-08-29 18:23:24.960141 | orchestrator |
2025-08-29 18:23:24.960153 | orchestrator | + echo
2025-08-29 18:23:24.960165 | orchestrator | + echo '# Cinder'
2025-08-29 18:23:24.960176 | orchestrator | + echo
2025-08-29 18:23:24.960187 | orchestrator | + openstack volume service list
2025-08-29 18:23:27.765719 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-08-29 18:23:27.765818 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-08-29 18:23:27.765830 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-08-29 18:23:27.765841 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T18:23:21.000000 |
2025-08-29 18:23:27.765850 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T18:23:24.000000 |
2025-08-29 18:23:27.765860 | orchestrator | | cinder-scheduler | testbed-node-2 | internal |
enabled | up | 2025-08-29T18:23:24.000000 | 2025-08-29 18:23:27.765869 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-08-29T18:23:24.000000 | 2025-08-29 18:23:27.765879 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-08-29T18:23:24.000000 | 2025-08-29 18:23:27.765888 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-08-29T18:23:18.000000 | 2025-08-29 18:23:27.765898 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-08-29T18:23:23.000000 | 2025-08-29 18:23:27.765907 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-08-29T18:23:24.000000 | 2025-08-29 18:23:27.765916 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-08-29T18:23:25.000000 | 2025-08-29 18:23:27.765943 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-08-29 18:23:28.029008 | orchestrator | 2025-08-29 18:23:28.029106 | orchestrator | # Neutron 2025-08-29 18:23:28.029122 | orchestrator | 2025-08-29 18:23:28.029134 | orchestrator | + echo 2025-08-29 18:23:28.029146 | orchestrator | + echo '# Neutron' 2025-08-29 18:23:28.029157 | orchestrator | + echo 2025-08-29 18:23:28.029168 | orchestrator | + openstack network agent list 2025-08-29 18:23:30.847860 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 18:23:30.847968 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-08-29 18:23:30.847983 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 18:23:30.847995 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | 
testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-08-29 18:23:30.848006 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-08-29 18:23:30.848017 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-08-29 18:23:30.848055 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-08-29 18:23:30.848067 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-08-29 18:23:30.848078 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-08-29 18:23:30.848088 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 18:23:30.848099 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 18:23:30.848110 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-08-29 18:23:30.848120 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-08-29 18:23:31.122979 | orchestrator | + openstack network service provider list 2025-08-29 18:23:33.644041 | orchestrator | +---------------+------+---------+ 2025-08-29 18:23:33.644142 | orchestrator | | Service Type | Name | Default | 2025-08-29 18:23:33.644156 | orchestrator | +---------------+------+---------+ 2025-08-29 18:23:33.644167 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-08-29 18:23:33.644178 | orchestrator | +---------------+------+---------+ 2025-08-29 18:23:33.919017 | orchestrator | 2025-08-29 18:23:33.919098 | orchestrator | 
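The Neutron check above reads the Alive column of `openstack network agent list`, which the client renders as `:-)` for a live agent (and `XXX` for a dead one) in its default table format. A minimal sketch of turning that table into a pass/fail check — the function name and sample rows are illustrative, not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Pass/fail check over an "openstack network agent list" table read from
# stdin: the client's table format renders live agents as ":-)" and dead
# agents as "XXX".
check_agents() {
    if grep -q 'XXX'; then
        echo "dead agents found"
        return 1
    fi
    echo "all agents alive"
}

# Demonstration with rows in the style of the table above:
check_agents <<'EOF'
| testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
| testbed-node-5 | OVN Controller agent         | testbed-node-5 |      | :-) | UP | ovn-controller |
EOF
# prints "all agents alive"
```

In a real check this would be fed live output, e.g. `openstack network agent list | check_agents`.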
# Nova 2025-08-29 18:23:33.919111 | orchestrator | 2025-08-29 18:23:33.919122 | orchestrator | + echo 2025-08-29 18:23:33.919133 | orchestrator | + echo '# Nova' 2025-08-29 18:23:33.919144 | orchestrator | + echo 2025-08-29 18:23:33.919155 | orchestrator | + openstack compute service list 2025-08-29 18:23:37.234287 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 18:23:37.234429 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-08-29 18:23:37.234446 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 18:23:37.234458 | orchestrator | | 782b4f50-6361-4f9b-87c4-c85ad6e16598 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-08-29T18:23:34.000000 | 2025-08-29 18:23:37.234468 | orchestrator | | 62830eb0-1833-4656-a6da-1148fe4e87e0 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-08-29T18:23:36.000000 | 2025-08-29 18:23:37.234479 | orchestrator | | 4c861b1d-f3b4-4e21-bf67-4f63debe9493 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-08-29T18:23:34.000000 | 2025-08-29 18:23:37.234489 | orchestrator | | 6761283c-739f-4d33-8599-01c1ca580504 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-08-29T18:23:34.000000 | 2025-08-29 18:23:37.234500 | orchestrator | | f5f5832b-26be-4760-9e92-c60001a530f5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-08-29T18:23:36.000000 | 2025-08-29 18:23:37.234511 | orchestrator | | 25062870-19a6-42e7-a912-5c7e50d1ceab | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-08-29T18:23:31.000000 | 2025-08-29 18:23:37.234522 | orchestrator | | a4dd1bfc-7268-4d69-b3a8-1bd6965c4f2b | nova-compute | testbed-node-3 | nova | enabled | up | 2025-08-29T18:23:36.000000 | 2025-08-29 18:23:37.234532 | 
orchestrator | | 46107dc9-b1a9-4af7-a6e5-cea1073671a6 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-08-29T18:23:36.000000 | 2025-08-29 18:23:37.234543 | orchestrator | | 440505fb-3360-4810-ba0f-47061ce34c00 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-08-29T18:23:27.000000 | 2025-08-29 18:23:37.234573 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-08-29 18:23:37.504536 | orchestrator | + openstack hypervisor list 2025-08-29 18:23:41.894297 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 18:23:41.894447 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-08-29 18:23:41.894463 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 18:23:41.894475 | orchestrator | | ab253b87-caa1-4b94-a53a-ec2637f7095f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-08-29 18:23:41.894486 | orchestrator | | ff8ee920-6778-4ce2-af7e-fba8f4ce5406 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-08-29 18:23:41.894497 | orchestrator | | 80580891-b54c-4b43-837e-297399c6855b | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-08-29 18:23:41.894508 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-08-29 18:23:42.186749 | orchestrator | 2025-08-29 18:23:42.186843 | orchestrator | # Run OpenStack test play 2025-08-29 18:23:42.186858 | orchestrator | 2025-08-29 18:23:42.186870 | orchestrator | + echo 2025-08-29 18:23:42.186882 | orchestrator | + echo '# Run OpenStack test play' 2025-08-29 18:23:42.186894 | orchestrator | + echo 2025-08-29 18:23:42.186906 | orchestrator | + osism apply --environment openstack test 2025-08-29 18:23:44.125747 | orchestrator | 2025-08-29 18:23:44 | INFO  
| Trying to run play test in environment openstack 2025-08-29 18:23:54.342731 | orchestrator | 2025-08-29 18:23:54 | INFO  | Task 91050391-7717-44ca-9f5f-1b707a610882 (test) was prepared for execution. 2025-08-29 18:23:54.342844 | orchestrator | 2025-08-29 18:23:54 | INFO  | It takes a moment until task 91050391-7717-44ca-9f5f-1b707a610882 (test) has been started and output is visible here. 2025-08-29 18:29:42.468252 | orchestrator | 2025-08-29 18:29:42.468407 | orchestrator | PLAY [Create test project] ***************************************************** 2025-08-29 18:29:42.468426 | orchestrator | 2025-08-29 18:29:42.468438 | orchestrator | TASK [Create test domain] ****************************************************** 2025-08-29 18:29:42.468450 | orchestrator | Friday 29 August 2025 18:23:58 +0000 (0:00:00.078) 0:00:00.078 ********* 2025-08-29 18:29:42.468461 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468473 | orchestrator | 2025-08-29 18:29:42.468484 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-08-29 18:29:42.468494 | orchestrator | Friday 29 August 2025 18:24:02 +0000 (0:00:03.657) 0:00:03.736 ********* 2025-08-29 18:29:42.468505 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468560 | orchestrator | 2025-08-29 18:29:42.468574 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-08-29 18:29:42.468586 | orchestrator | Friday 29 August 2025 18:24:06 +0000 (0:00:04.158) 0:00:07.894 ********* 2025-08-29 18:29:42.468597 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468608 | orchestrator | 2025-08-29 18:29:42.468618 | orchestrator | TASK [Create test project] ***************************************************** 2025-08-29 18:29:42.468629 | orchestrator | Friday 29 August 2025 18:24:12 +0000 (0:00:06.322) 0:00:14.216 ********* 2025-08-29 18:29:42.468642 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468654 | 
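The include.sh sourced at the start of this check exports `OSISM_APPLY_RETRY=1`, suggesting apply invocations like the one above can be retried. A minimal retry wrapper in that spirit — illustrative only; the real retry logic lives in the testbed configuration scripts:

```shell
#!/usr/bin/env bash
# Illustrative retry wrapper: run a command up to $OSISM_APPLY_RETRY times,
# stopping at the first success. With the trace's value of 1 this degrades
# to a single attempt.
OSISM_APPLY_RETRY="${OSISM_APPLY_RETRY:-1}"

apply_with_retry() {
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$OSISM_APPLY_RETRY" ]; then
            echo "failed after $attempt attempt(s)" >&2
            return 1
        fi
        attempt=$((attempt + 1))
    done
}

# Example: succeeds on the first try.
apply_with_retry true && echo "apply ok"
# prints "apply ok"
```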
orchestrator | 2025-08-29 18:29:42.468667 | orchestrator | TASK [Create test user] ******************************************************** 2025-08-29 18:29:42.468679 | orchestrator | Friday 29 August 2025 18:24:16 +0000 (0:00:04.034) 0:00:18.251 ********* 2025-08-29 18:29:42.468691 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468703 | orchestrator | 2025-08-29 18:29:42.468716 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-08-29 18:29:42.468729 | orchestrator | Friday 29 August 2025 18:24:20 +0000 (0:00:04.043) 0:00:22.294 ********* 2025-08-29 18:29:42.468741 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-08-29 18:29:42.468775 | orchestrator | changed: [localhost] => (item=member) 2025-08-29 18:29:42.468803 | orchestrator | changed: [localhost] => (item=creator) 2025-08-29 18:29:42.468841 | orchestrator | 2025-08-29 18:29:42.468855 | orchestrator | TASK [Create test server group] ************************************************ 2025-08-29 18:29:42.468867 | orchestrator | Friday 29 August 2025 18:24:32 +0000 (0:00:11.798) 0:00:34.093 ********* 2025-08-29 18:29:42.468879 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468892 | orchestrator | 2025-08-29 18:29:42.468904 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-08-29 18:29:42.468917 | orchestrator | Friday 29 August 2025 18:24:36 +0000 (0:00:04.268) 0:00:38.362 ********* 2025-08-29 18:29:42.468929 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468941 | orchestrator | 2025-08-29 18:29:42.468953 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-08-29 18:29:42.468965 | orchestrator | Friday 29 August 2025 18:24:41 +0000 (0:00:04.809) 0:00:43.171 ********* 2025-08-29 18:29:42.468977 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.468989 | orchestrator | 2025-08-29 18:29:42.469001 | 
orchestrator | TASK [Create icmp security group] ********************************************** 2025-08-29 18:29:42.469011 | orchestrator | Friday 29 August 2025 18:24:45 +0000 (0:00:04.154) 0:00:47.326 ********* 2025-08-29 18:29:42.469022 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.469032 | orchestrator | 2025-08-29 18:29:42.469043 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-08-29 18:29:42.469053 | orchestrator | Friday 29 August 2025 18:24:49 +0000 (0:00:03.889) 0:00:51.216 ********* 2025-08-29 18:29:42.469064 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.469074 | orchestrator | 2025-08-29 18:29:42.469085 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-08-29 18:29:42.469096 | orchestrator | Friday 29 August 2025 18:24:53 +0000 (0:00:04.029) 0:00:55.245 ********* 2025-08-29 18:29:42.469106 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.469116 | orchestrator | 2025-08-29 18:29:42.469127 | orchestrator | TASK [Create test network topology] ******************************************** 2025-08-29 18:29:42.469152 | orchestrator | Friday 29 August 2025 18:24:57 +0000 (0:00:03.807) 0:00:59.053 ********* 2025-08-29 18:29:42.469163 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.469174 | orchestrator | 2025-08-29 18:29:42.469184 | orchestrator | TASK [Create test instances] *************************************************** 2025-08-29 18:29:42.469195 | orchestrator | Friday 29 August 2025 18:25:11 +0000 (0:00:13.800) 0:01:12.854 ********* 2025-08-29 18:29:42.469206 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 18:29:42.469217 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 18:29:42.469227 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 18:29:42.469237 | orchestrator | 2025-08-29 18:29:42.469248 | orchestrator | STILL ALIVE [task 'Create test instances' is 
running] ************************** 2025-08-29 18:29:42.469259 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 18:29:42.469270 | orchestrator | 2025-08-29 18:29:42.469281 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-08-29 18:29:42.469292 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 18:29:42.469302 | orchestrator | 2025-08-29 18:29:42.469313 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-08-29 18:29:42.469323 | orchestrator | Friday 29 August 2025 18:28:19 +0000 (0:03:08.271) 0:04:21.125 ********* 2025-08-29 18:29:42.469334 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 18:29:42.469344 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 18:29:42.469355 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 18:29:42.469366 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 18:29:42.469376 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 18:29:42.469387 | orchestrator | 2025-08-29 18:29:42.469397 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-08-29 18:29:42.469408 | orchestrator | Friday 29 August 2025 18:28:42 +0000 (0:00:23.497) 0:04:44.623 ********* 2025-08-29 18:29:42.469419 | orchestrator | changed: [localhost] => (item=test) 2025-08-29 18:29:42.469430 | orchestrator | changed: [localhost] => (item=test-1) 2025-08-29 18:29:42.469449 | orchestrator | changed: [localhost] => (item=test-2) 2025-08-29 18:29:42.469460 | orchestrator | changed: [localhost] => (item=test-3) 2025-08-29 18:29:42.469490 | orchestrator | changed: [localhost] => (item=test-4) 2025-08-29 18:29:42.469501 | orchestrator | 2025-08-29 18:29:42.469517 | orchestrator | TASK [Create test volume] ****************************************************** 2025-08-29 18:29:42.469528 | orchestrator | Friday 29 August 2025 
18:29:16 +0000 (0:00:33.746) 0:05:18.370 ********* 2025-08-29 18:29:42.469538 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.469549 | orchestrator | 2025-08-29 18:29:42.469560 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-08-29 18:29:42.469571 | orchestrator | Friday 29 August 2025 18:29:23 +0000 (0:00:06.834) 0:05:25.205 ********* 2025-08-29 18:29:42.469581 | orchestrator | changed: [localhost] 2025-08-29 18:29:42.469592 | orchestrator | 2025-08-29 18:29:42.469603 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-08-29 18:29:42.469614 | orchestrator | Friday 29 August 2025 18:29:37 +0000 (0:00:13.566) 0:05:38.771 ********* 2025-08-29 18:29:42.469625 | orchestrator | ok: [localhost] 2025-08-29 18:29:42.469636 | orchestrator | 2025-08-29 18:29:42.469647 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-08-29 18:29:42.469658 | orchestrator | Friday 29 August 2025 18:29:42 +0000 (0:00:05.076) 0:05:43.848 ********* 2025-08-29 18:29:42.469669 | orchestrator | ok: [localhost] => { 2025-08-29 18:29:42.469679 | orchestrator |  "msg": "192.168.112.122" 2025-08-29 18:29:42.469690 | orchestrator | } 2025-08-29 18:29:42.469702 | orchestrator | 2025-08-29 18:29:42.469713 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 18:29:42.469724 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 18:29:42.469736 | orchestrator | 2025-08-29 18:29:42.469746 | orchestrator | 2025-08-29 18:29:42.469787 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 18:29:42.469798 | orchestrator | Friday 29 August 2025 18:29:42 +0000 (0:00:00.039) 0:05:43.887 ********* 2025-08-29 18:29:42.469809 | orchestrator | 
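The PLAY RECAP above (`ok=20 changed=18 unreachable=0 failed=0`) is what a job like this gates on. A small sketch of asserting success from such a recap line — the function name and regex are illustrative, assuming the standard Ansible recap format:

```shell
#!/usr/bin/env bash
# Returns 0 iff an Ansible PLAY RECAP line on stdin reports no
# unreachable hosts and no failed tasks.
recap_ok() {
    grep -Eq 'unreachable=0 .*failed=0( |$)'
}

# Demonstration with the recap from the run above:
if recap_ok <<< 'localhost : ok=20 changed=18 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0'; then
    echo "play succeeded"
fi
# prints "play succeeded"
```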
=============================================================================== 2025-08-29 18:29:42.469820 | orchestrator | Create test instances ------------------------------------------------- 188.27s 2025-08-29 18:29:42.469830 | orchestrator | Add tag to instances --------------------------------------------------- 33.75s 2025-08-29 18:29:42.469841 | orchestrator | Add metadata to instances ---------------------------------------------- 23.50s 2025-08-29 18:29:42.469851 | orchestrator | Create test network topology ------------------------------------------- 13.80s 2025-08-29 18:29:42.469862 | orchestrator | Attach test volume ----------------------------------------------------- 13.57s 2025-08-29 18:29:42.469872 | orchestrator | Add member roles to user test ------------------------------------------ 11.80s 2025-08-29 18:29:42.469883 | orchestrator | Create test volume ------------------------------------------------------ 6.83s 2025-08-29 18:29:42.469894 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.32s 2025-08-29 18:29:42.469904 | orchestrator | Create floating ip address ---------------------------------------------- 5.08s 2025-08-29 18:29:42.469915 | orchestrator | Create ssh security group ----------------------------------------------- 4.81s 2025-08-29 18:29:42.469925 | orchestrator | Create test server group ------------------------------------------------ 4.27s 2025-08-29 18:29:42.469936 | orchestrator | Create test-admin user -------------------------------------------------- 4.16s 2025-08-29 18:29:42.469946 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.15s 2025-08-29 18:29:42.469957 | orchestrator | Create test user -------------------------------------------------------- 4.04s 2025-08-29 18:29:42.469967 | orchestrator | Create test project ----------------------------------------------------- 4.03s 2025-08-29 18:29:42.469978 | orchestrator | Add rule to 
icmp security group ----------------------------------------- 4.03s 2025-08-29 18:29:42.469996 | orchestrator | Create icmp security group ---------------------------------------------- 3.89s 2025-08-29 18:29:42.470006 | orchestrator | Create test keypair ----------------------------------------------------- 3.81s 2025-08-29 18:29:42.470067 | orchestrator | Create test domain ------------------------------------------------------ 3.66s 2025-08-29 18:29:42.470081 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-08-29 18:29:42.767179 | orchestrator | + server_list 2025-08-29 18:29:42.767269 | orchestrator | + openstack --os-cloud test server list 2025-08-29 18:29:46.504392 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 18:29:46.504499 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-08-29 18:29:46.504514 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 18:29:46.504525 | orchestrator | | 1450394a-5dd9-4363-ad89-1f4c92fdc1b4 | test-4 | ACTIVE | auto_allocated_network=10.42.0.6, 192.168.112.129 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 18:29:46.504536 | orchestrator | | 40abd621-999e-46c1-9841-e76d0ecfbe94 | test-3 | ACTIVE | auto_allocated_network=10.42.0.57, 192.168.112.126 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 18:29:46.504546 | orchestrator | | 617ec63a-e6d2-4439-bada-1836211ba5ba | test-2 | ACTIVE | auto_allocated_network=10.42.0.62, 192.168.112.198 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 18:29:46.504557 | orchestrator | | 50122c94-c02a-4ad0-8aed-65401da4a3c6 | test-1 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.147 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 18:29:46.504588 | orchestrator | | f8e7b7b1-c567-465a-93df-aaacb34db4ff | test | 
ACTIVE | auto_allocated_network=10.42.0.11, 192.168.112.122 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-08-29 18:29:46.504600 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-08-29 18:29:46.780239 | orchestrator | + openstack --os-cloud test server show test 2025-08-29 18:29:49.930743 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 18:29:49.930898 | orchestrator | | Field | Value | 2025-08-29 18:29:49.930914 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-08-29 18:29:49.930926 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-08-29 18:29:49.930938 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-08-29 18:29:49.930949 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-08-29 18:29:49.930979 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-08-29 18:29:49.930998 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-08-29 18:29:49.931010 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-08-29 18:29:49.931021 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-08-29 18:29:49.931032 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-08-29 18:29:49.931059 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-08-29 18:29:49.931071 | orchestrator | | 
OS-EXT-SRV-ATTR:reservation_id | None | 2025-08-29 18:29:49.931082 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-08-29 18:29:49.931093 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-08-29 18:29:49.931104 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-08-29 18:29:49.931122 | orchestrator | | OS-EXT-STS:task_state | None | 2025-08-29 18:29:49.931133 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-08-29 18:29:49.931148 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T18:25:39.000000 | 2025-08-29 18:29:49.931159 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-08-29 18:29:49.931170 | orchestrator | | accessIPv4 | | 2025-08-29 18:29:49.931181 | orchestrator | | accessIPv6 | | 2025-08-29 18:29:49.931192 | orchestrator | | addresses | auto_allocated_network=10.42.0.11, 192.168.112.122 | 2025-08-29 18:29:49.931209 | orchestrator | | config_drive | | 2025-08-29 18:29:49.931221 | orchestrator | | created | 2025-08-29T18:25:19Z | 2025-08-29 18:29:49.931232 | orchestrator | | description | None | 2025-08-29 18:29:49.931243 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-08-29 18:29:49.931259 | orchestrator | | hostId | 5d91d881e96da40ce0a83ef3d10a075fe589260c2d213efb6c2d212d | 2025-08-29 18:29:49.931272 | orchestrator | | host_status | None | 2025-08-29 18:29:49.931285 | orchestrator | | id | f8e7b7b1-c567-465a-93df-aaacb34db4ff | 2025-08-29 18:29:49.931302 | orchestrator | | image | Cirros 0.6.2 (33ccf1db-8e26-437f-8abc-94813af994af) | 2025-08-29 18:29:49.931315 | orchestrator | | key_name | test | 2025-08-29 18:29:49.931328 | orchestrator | | locked | False | 2025-08-29 18:29:49.931340 | orchestrator | | locked_reason | 
None |
2025-08-29 18:29:49.931353 | orchestrator | | name | test |
2025-08-29 18:29:49.931371 | orchestrator | | pinned_availability_zone | None |
2025-08-29 18:29:49.931384 | orchestrator | | progress | 0 |
2025-08-29 18:29:49.931397 | orchestrator | | project_id | 9de68ae0ee3c410c90c8686acdbb3581 |
2025-08-29 18:29:49.931415 | orchestrator | | properties | hostname='test' |
2025-08-29 18:29:49.931428 | orchestrator | | security_groups | name='ssh' |
2025-08-29 18:29:49.931440 | orchestrator | | | name='icmp' |
2025-08-29 18:29:49.931452 | orchestrator | | server_groups | None |
2025-08-29 18:29:49.931468 | orchestrator | | status | ACTIVE |
2025-08-29 18:29:49.931481 | orchestrator | | tags | test |
2025-08-29 18:29:49.931493 | orchestrator | | trusted_image_certificates | None |
2025-08-29 18:29:49.931506 | orchestrator | | updated | 2025-08-29T18:28:24Z |
2025-08-29 18:29:49.931523 | orchestrator | | user_id | e25fa259b68c4594bcf3769ff079331b |
2025-08-29 18:29:49.931536 | orchestrator | | volumes_attached | delete_on_termination='False', id='1aa23791-6d8c-4d79-a14d-696835744dd0' |
2025-08-29 18:29:49.934227 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:50.193060 | orchestrator | + openstack --os-cloud test server show test-1
2025-08-29 18:29:53.191708 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:53.191845 | orchestrator | | Field | Value |
2025-08-29 18:29:53.191862 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:53.191874 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 18:29:53.191885 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 18:29:53.191896 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 18:29:53.191907 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-08-29 18:29:53.191920 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 18:29:53.191940 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 18:29:53.191963 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 18:29:53.191997 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 18:29:53.192048 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 18:29:53.192061 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 18:29:53.192072 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 18:29:53.192083 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 18:29:53.192094 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 18:29:53.192110 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 18:29:53.192121 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 18:29:53.192132 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T18:26:23.000000 |
2025-08-29 18:29:53.192142 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 18:29:53.192153 | orchestrator | | accessIPv4 | |
2025-08-29 18:29:53.192172 | orchestrator | | accessIPv6 | |
2025-08-29 18:29:53.192184 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.147 |
2025-08-29 18:29:53.192202 | orchestrator | | config_drive | |
2025-08-29 18:29:53.192213 | orchestrator | | created | 2025-08-29T18:26:02Z |
2025-08-29 18:29:53.192224 | orchestrator | | description | None |
2025-08-29 18:29:53.192235 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 18:29:53.192252 | orchestrator | | hostId | 5dffdddb4826e576676cd4661a4f88aa453608ad2ec5578748944b30 |
2025-08-29 18:29:53.192264 | orchestrator | | host_status | None |
2025-08-29 18:29:53.192277 | orchestrator | | id | 50122c94-c02a-4ad0-8aed-65401da4a3c6 |
2025-08-29 18:29:53.192289 | orchestrator | | image | Cirros 0.6.2 (33ccf1db-8e26-437f-8abc-94813af994af) |
2025-08-29 18:29:53.192301 | orchestrator | | key_name | test |
2025-08-29 18:29:53.192322 | orchestrator | | locked | False |
2025-08-29 18:29:53.192335 | orchestrator | | locked_reason | None |
2025-08-29 18:29:53.192347 | orchestrator | | name | test-1 |
2025-08-29 18:29:53.192366 | orchestrator | | pinned_availability_zone | None |
2025-08-29 18:29:53.192379 | orchestrator | | progress | 0 |
2025-08-29 18:29:53.192391 | orchestrator | | project_id | 9de68ae0ee3c410c90c8686acdbb3581 |
2025-08-29 18:29:53.192404 | orchestrator | | properties | hostname='test-1' |
2025-08-29 18:29:53.192421 | orchestrator | | security_groups | name='ssh' |
2025-08-29 18:29:53.192434 | orchestrator | | | name='icmp' |
2025-08-29 18:29:53.192447 | orchestrator | | server_groups | None |
2025-08-29 18:29:53.192459 | orchestrator | | status | ACTIVE |
2025-08-29 18:29:53.192486 | orchestrator | | tags | test |
2025-08-29 18:29:53.192500 | orchestrator | | trusted_image_certificates | None |
2025-08-29 18:29:53.192513 | orchestrator | | updated | 2025-08-29T18:28:29Z |
2025-08-29 18:29:53.192531 | orchestrator | | user_id | e25fa259b68c4594bcf3769ff079331b |
2025-08-29 18:29:53.192544 | orchestrator | | volumes_attached | |
2025-08-29 18:29:53.196400 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:53.452027 | orchestrator | + openstack --os-cloud test server show test-2
2025-08-29 18:29:56.617992 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:56.618146 | orchestrator | | Field | Value |
2025-08-29 18:29:56.618170 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:56.618181 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 18:29:56.618211 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 18:29:56.618221 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 18:29:56.618231 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-08-29 18:29:56.618241 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 18:29:56.618250 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 18:29:56.618260 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 18:29:56.618270 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 18:29:56.618295 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 18:29:56.618305 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 18:29:56.618319 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 18:29:56.618329 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 18:29:56.618346 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 18:29:56.618356 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 18:29:56.618366 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 18:29:56.618375 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T18:27:02.000000 |
2025-08-29 18:29:56.618385 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 18:29:56.618395 | orchestrator | | accessIPv4 | |
2025-08-29 18:29:56.618405 | orchestrator | | accessIPv6 | |
2025-08-29 18:29:56.618414 | orchestrator | | addresses | auto_allocated_network=10.42.0.62, 192.168.112.198 |
2025-08-29 18:29:56.618430 | orchestrator | | config_drive | |
2025-08-29 18:29:56.618440 | orchestrator | | created | 2025-08-29T18:26:41Z |
2025-08-29 18:29:56.618450 | orchestrator | | description | None |
2025-08-29 18:29:56.618465 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 18:29:56.618481 | orchestrator | | hostId | 781cfb27e8de6d0c2c58cb936891897e31572f260509318b5c34bdcc |
2025-08-29 18:29:56.618491 | orchestrator | | host_status | None |
2025-08-29 18:29:56.618501 | orchestrator | | id | 617ec63a-e6d2-4439-bada-1836211ba5ba |
2025-08-29 18:29:56.618511 | orchestrator | | image | Cirros 0.6.2 (33ccf1db-8e26-437f-8abc-94813af994af) |
2025-08-29 18:29:56.618521 | orchestrator | | key_name | test |
2025-08-29 18:29:56.618530 | orchestrator | | locked | False |
2025-08-29 18:29:56.618540 | orchestrator | | locked_reason | None |
2025-08-29 18:29:56.618550 | orchestrator | | name | test-2 |
2025-08-29 18:29:56.618564 | orchestrator | | pinned_availability_zone | None |
2025-08-29 18:29:56.618574 | orchestrator | | progress | 0 |
2025-08-29 18:29:56.618593 | orchestrator | | project_id | 9de68ae0ee3c410c90c8686acdbb3581 |
2025-08-29 18:29:56.618603 | orchestrator | | properties | hostname='test-2' |
2025-08-29 18:29:56.618613 | orchestrator | | security_groups | name='ssh' |
2025-08-29 18:29:56.618622 | orchestrator | | | name='icmp' |
2025-08-29 18:29:56.618632 | orchestrator | | server_groups | None |
2025-08-29 18:29:56.618641 | orchestrator | | status | ACTIVE |
2025-08-29 18:29:56.618651 | orchestrator | | tags | test |
2025-08-29 18:29:56.618661 | orchestrator | | trusted_image_certificates | None |
2025-08-29 18:29:56.618671 | orchestrator | | updated | 2025-08-29T18:28:33Z |
2025-08-29 18:29:56.618685 | orchestrator | | user_id | e25fa259b68c4594bcf3769ff079331b |
2025-08-29 18:29:56.618702 | orchestrator | | volumes_attached | |
2025-08-29 18:29:56.622425 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:29:56.905536 | orchestrator | + openstack --os-cloud test server show test-3
2025-08-29 18:30:00.104288 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:30:00.104396 | orchestrator | | Field | Value |
2025-08-29 18:30:00.104411 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:30:00.104423 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 18:30:00.104434 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 18:30:00.104445 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 18:30:00.104457 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-08-29 18:30:00.104468 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 18:30:00.104479 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 18:30:00.104514 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 18:30:00.104527 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 18:30:00.104569 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 18:30:00.104581 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 18:30:00.104592 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 18:30:00.104604 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 18:30:00.104615 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 18:30:00.104626 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 18:30:00.104637 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 18:30:00.104648 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T18:27:36.000000 |
2025-08-29 18:30:00.104659 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 18:30:00.104678 | orchestrator | | accessIPv4 | |
2025-08-29 18:30:00.104689 | orchestrator | | accessIPv6 | |
2025-08-29 18:30:00.104701 | orchestrator | | addresses | auto_allocated_network=10.42.0.57, 192.168.112.126 |
2025-08-29 18:30:00.104723 | orchestrator | | config_drive | |
2025-08-29 18:30:00.104735 | orchestrator | | created | 2025-08-29T18:27:20Z |
2025-08-29 18:30:00.104747 | orchestrator | | description | None |
2025-08-29 18:30:00.104758 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 18:30:00.104839 | orchestrator | | hostId | 5dffdddb4826e576676cd4661a4f88aa453608ad2ec5578748944b30 |
2025-08-29 18:30:00.104855 | orchestrator | | host_status | None |
2025-08-29 18:30:00.104868 | orchestrator | | id | 40abd621-999e-46c1-9841-e76d0ecfbe94 |
2025-08-29 18:30:00.104889 | orchestrator | | image | Cirros 0.6.2 (33ccf1db-8e26-437f-8abc-94813af994af) |
2025-08-29 18:30:00.104902 | orchestrator | | key_name | test |
2025-08-29 18:30:00.104915 | orchestrator | | locked | False |
2025-08-29 18:30:00.104927 | orchestrator | | locked_reason | None |
2025-08-29 18:30:00.104946 | orchestrator | | name | test-3 |
2025-08-29 18:30:00.104967 | orchestrator | | pinned_availability_zone | None |
2025-08-29 18:30:00.104980 | orchestrator | | progress | 0 |
2025-08-29 18:30:00.104993 | orchestrator | | project_id | 9de68ae0ee3c410c90c8686acdbb3581 |
2025-08-29 18:30:00.105006 | orchestrator | | properties | hostname='test-3' |
2025-08-29 18:30:00.105018 | orchestrator | | security_groups | name='ssh' |
2025-08-29 18:30:00.105031 | orchestrator | | | name='icmp' |
2025-08-29 18:30:00.105050 | orchestrator | | server_groups | None |
2025-08-29 18:30:00.105063 | orchestrator | | status | ACTIVE |
2025-08-29 18:30:00.105075 | orchestrator | | tags | test |
2025-08-29 18:30:00.105088 | orchestrator | | trusted_image_certificates | None |
2025-08-29 18:30:00.105101 | orchestrator | | updated | 2025-08-29T18:28:38Z |
2025-08-29 18:30:00.105119 | orchestrator | | user_id | e25fa259b68c4594bcf3769ff079331b |
2025-08-29 18:30:00.105133 | orchestrator | | volumes_attached | |
2025-08-29 18:30:00.108676 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:30:00.372114 | orchestrator | + openstack --os-cloud test server show test-4
2025-08-29 18:30:03.739036 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:30:03.739129 | orchestrator | | Field | Value |
2025-08-29 18:30:03.739144 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:30:03.739179 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-08-29 18:30:03.739206 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-08-29 18:30:03.739217 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-08-29 18:30:03.739228 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-08-29 18:30:03.739239 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-08-29 18:30:03.739254 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-08-29 18:30:03.739265 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-08-29 18:30:03.739276 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-08-29 18:30:03.739303 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-08-29 18:30:03.739315 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-08-29 18:30:03.739326 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-08-29 18:30:03.739345 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-08-29 18:30:03.739356 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-08-29 18:30:03.739367 | orchestrator | | OS-EXT-STS:task_state | None |
2025-08-29 18:30:03.739378 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-08-29 18:30:03.739388 | orchestrator | | OS-SRV-USG:launched_at | 2025-08-29T18:28:08.000000 |
2025-08-29 18:30:03.739399 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-08-29 18:30:03.739415 | orchestrator | | accessIPv4 | |
2025-08-29 18:30:03.739426 | orchestrator | | accessIPv6 | |
2025-08-29 18:30:03.739437 | orchestrator | | addresses | auto_allocated_network=10.42.0.6, 192.168.112.129 |
2025-08-29 18:30:03.739454 | orchestrator | | config_drive | |
2025-08-29 18:30:03.739472 | orchestrator | | created | 2025-08-29T18:27:52Z |
2025-08-29 18:30:03.739484 | orchestrator | | description | None |
2025-08-29 18:30:03.739494 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-08-29 18:30:03.739505 | orchestrator | | hostId | 5d91d881e96da40ce0a83ef3d10a075fe589260c2d213efb6c2d212d |
2025-08-29 18:30:03.739517 | orchestrator | | host_status | None |
2025-08-29 18:30:03.739527 | orchestrator | | id | 1450394a-5dd9-4363-ad89-1f4c92fdc1b4 |
2025-08-29 18:30:03.739538 | orchestrator | | image | Cirros 0.6.2 (33ccf1db-8e26-437f-8abc-94813af994af) |
2025-08-29 18:30:03.739549 | orchestrator | | key_name | test |
2025-08-29 18:30:03.739565 | orchestrator | | locked | False |
2025-08-29 18:30:03.739576 | orchestrator | | locked_reason | None |
2025-08-29 18:30:03.739588 | orchestrator | | name | test-4 |
2025-08-29 18:30:03.739614 | orchestrator | | pinned_availability_zone | None |
2025-08-29 18:30:03.739627 | orchestrator | | progress | 0 |
2025-08-29 18:30:03.739640 | orchestrator | | project_id | 9de68ae0ee3c410c90c8686acdbb3581 |
2025-08-29 18:30:03.739653 | orchestrator | | properties | hostname='test-4' |
2025-08-29 18:30:03.739665 | orchestrator | | security_groups | name='ssh' |
2025-08-29 18:30:03.739678 | orchestrator | | | name='icmp' |
2025-08-29 18:30:03.739690 | orchestrator | | server_groups | None |
2025-08-29 18:30:03.739703 | orchestrator | | status | ACTIVE |
2025-08-29 18:30:03.739721 | orchestrator | | tags | test |
2025-08-29 18:30:03.739734 | orchestrator | | trusted_image_certificates | None |
2025-08-29 18:30:03.739747 | orchestrator | | updated | 2025-08-29T18:28:42Z |
2025-08-29 18:30:03.739771 | orchestrator | | user_id | e25fa259b68c4594bcf3769ff079331b |
2025-08-29 18:30:03.739809 | orchestrator | | volumes_attached | |
2025-08-29 18:30:03.743936 | orchestrator | +-------------------------------------+----------------------------------------------------+
2025-08-29 18:30:04.009025 | orchestrator | + server_ping
2025-08-29 18:30:04.011141 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-08-29 18:30:04.011174 | orchestrator | ++ tr -d '\r'
2025-08-29 18:30:07.013617 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-08-29 18:30:07.013701 | orchestrator | + ping -c3 192.168.112.129
2025-08-29 18:30:07.032777 | orchestrator | PING 192.168.112.129 (192.168.112.129) 56(84) bytes of data.
2025-08-29 18:30:07.032837 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=1 ttl=63 time=10.1 ms
2025-08-29 18:30:08.027952 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=2 ttl=63 time=2.85 ms
2025-08-29 18:30:09.029753 | orchestrator | 64 bytes from 192.168.112.129: icmp_seq=3 ttl=63 time=2.32 ms
2025-08-29 18:30:09.029856 | orchestrator |
2025-08-29 18:30:09.029873 | orchestrator | --- 192.168.112.129 ping statistics ---
2025-08-29 18:30:09.029885 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-08-29 18:30:09.029896 | orchestrator | rtt min/avg/max/mdev = 2.318/5.093/10.113/3.556 ms
2025-08-29 18:30:09.029916 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-08-29 18:30:09.029928 | orchestrator | + ping -c3 192.168.112.147
2025-08-29 18:30:09.042769 | orchestrator | PING 192.168.112.147 (192.168.112.147) 56(84) bytes of data.
2025-08-29 18:30:09.042816 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=1 ttl=63 time=9.34 ms
2025-08-29 18:30:10.037737 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=2 ttl=63 time=2.49 ms
2025-08-29 18:30:11.039780 | orchestrator | 64 bytes from 192.168.112.147: icmp_seq=3 ttl=63 time=2.05 ms
2025-08-29 18:30:11.039921 | orchestrator |
2025-08-29 18:30:11.039939 | orchestrator | --- 192.168.112.147 ping statistics ---
2025-08-29 18:30:11.039951 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-08-29 18:30:11.039962 | orchestrator | rtt min/avg/max/mdev = 2.051/4.628/9.340/3.336 ms
2025-08-29 18:30:11.039973 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-08-29 18:30:11.039984 | orchestrator | + ping -c3 192.168.112.198
2025-08-29 18:30:11.050651 | orchestrator | PING 192.168.112.198 (192.168.112.198) 56(84) bytes of data.
2025-08-29 18:30:11.050705 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=1 ttl=63 time=7.11 ms
2025-08-29 18:30:12.046908 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=2 ttl=63 time=1.80 ms
2025-08-29 18:30:13.049115 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=3 ttl=63 time=1.86 ms
2025-08-29 18:30:13.049201 | orchestrator |
2025-08-29 18:30:13.049218 | orchestrator | --- 192.168.112.198 ping statistics ---
2025-08-29 18:30:13.049231 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-08-29 18:30:13.049243 | orchestrator | rtt min/avg/max/mdev = 1.798/3.588/7.108/2.488 ms
2025-08-29 18:30:13.050138 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-08-29 18:30:13.050190 | orchestrator | + ping -c3 192.168.112.122
2025-08-29 18:30:13.062651 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-08-29 18:30:13.062679 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=4.97 ms
2025-08-29 18:30:14.061772 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.30 ms
2025-08-29 18:30:15.062345 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.92 ms
2025-08-29 18:30:15.062437 | orchestrator |
2025-08-29 18:30:15.062453 | orchestrator | --- 192.168.112.122 ping statistics ---
2025-08-29 18:30:15.062466 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-08-29 18:30:15.062494 | orchestrator | rtt min/avg/max/mdev = 1.916/3.061/4.973/1.360 ms
2025-08-29 18:30:15.062507 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-08-29 18:30:15.062529 | orchestrator | + ping -c3 192.168.112.126
2025-08-29 18:30:15.075165 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2025-08-29 18:30:15.075190 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=8.22 ms
2025-08-29 18:30:16.071464 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.56 ms
2025-08-29 18:30:17.072843 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=2.19 ms
2025-08-29 18:30:17.072938 | orchestrator |
2025-08-29 18:30:17.072954 | orchestrator | --- 192.168.112.126 ping statistics ---
2025-08-29 18:30:17.072967 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-08-29 18:30:17.072979 | orchestrator | rtt min/avg/max/mdev = 2.193/4.326/8.223/2.759 ms
2025-08-29 18:30:17.073502 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-08-29 18:30:17.175892 | orchestrator | ok: Runtime: 0:10:22.063535
2025-08-29 18:30:17.208618 |
2025-08-29 18:30:17.208743 | TASK [Run tempest]
2025-08-29 18:30:17.743695 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:17.763544 |
2025-08-29 18:30:17.763739 | TASK [Check prometheus alert status]
2025-08-29 18:30:18.301483 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:18.304713 |
2025-08-29 18:30:18.304940 | PLAY RECAP
2025-08-29 18:30:18.305094 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-08-29 18:30:18.305166 |
2025-08-29 18:30:18.535040 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-08-29 18:30:18.536219 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-08-29 18:30:19.290422 |
2025-08-29 18:30:19.290587 | PLAY [Post output play]
2025-08-29 18:30:19.306605 |
2025-08-29 18:30:19.306740 | LOOP [stage-output : Register sources]
2025-08-29 18:30:19.359862 |
2025-08-29 18:30:19.360124 | TASK [stage-output : Check sudo]
2025-08-29 18:30:20.207643 | orchestrator | sudo: a password is required
2025-08-29 18:30:20.397799 | orchestrator | ok: Runtime: 0:00:00.013498
2025-08-29 18:30:20.412235 |
2025-08-29 18:30:20.412407 | LOOP [stage-output : Set source and destination for files and folders]
2025-08-29 18:30:20.451642 |
2025-08-29 18:30:20.451953 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-08-29 18:30:20.530666 | orchestrator | ok
2025-08-29 18:30:20.539342 |
2025-08-29 18:30:20.539474 | LOOP [stage-output : Ensure target folders exist]
2025-08-29 18:30:20.991978 | orchestrator | ok: "docs"
2025-08-29 18:30:20.992298 |
2025-08-29 18:30:21.230172 | orchestrator | ok: "artifacts"
2025-08-29 18:30:21.469506 | orchestrator | ok: "logs"
2025-08-29 18:30:21.494147 |
2025-08-29 18:30:21.494327 | LOOP [stage-output : Copy files and folders to staging folder]
2025-08-29 18:30:21.527075 |
2025-08-29 18:30:21.527283 | TASK [stage-output : Make all log files readable]
2025-08-29 18:30:21.808816 | orchestrator | ok
2025-08-29 18:30:21.818021 |
2025-08-29 18:30:21.818166 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-08-29 18:30:21.852921 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:21.868708 |
2025-08-29 18:30:21.868907 | TASK [stage-output : Discover log files for compression]
2025-08-29 18:30:21.892985 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:21.905034 |
2025-08-29 18:30:21.905184 | LOOP [stage-output : Archive everything from logs]
2025-08-29 18:30:21.950886 |
2025-08-29 18:30:21.951055 | PLAY [Post cleanup play]
2025-08-29 18:30:21.959559 |
2025-08-29 18:30:21.959663 | TASK [Set cloud fact (Zuul deployment)]
2025-08-29 18:30:22.014298 | orchestrator | ok
2025-08-29 18:30:22.025998 |
2025-08-29 18:30:22.026115 | TASK [Set cloud fact (local deployment)]
2025-08-29 18:30:22.049746 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:22.063428 |
2025-08-29 18:30:22.063560 | TASK [Clean the cloud environment]
2025-08-29 18:30:22.620271 | orchestrator | 2025-08-29 18:30:22 - clean up servers
2025-08-29 18:30:23.775500 | orchestrator | 2025-08-29 18:30:23 - testbed-manager
2025-08-29 18:30:23.863678 | orchestrator | 2025-08-29 18:30:23 - testbed-node-2
2025-08-29 18:30:23.958865 | orchestrator | 2025-08-29 18:30:23 - testbed-node-1
2025-08-29 18:30:24.063841 | orchestrator | 2025-08-29 18:30:24 - testbed-node-5
2025-08-29 18:30:24.162625 | orchestrator | 2025-08-29 18:30:24 - testbed-node-0
2025-08-29 18:30:24.254263 | orchestrator | 2025-08-29 18:30:24 - testbed-node-4
2025-08-29 18:30:24.353386 | orchestrator | 2025-08-29 18:30:24 - testbed-node-3
2025-08-29 18:30:24.440865 | orchestrator | 2025-08-29 18:30:24 - clean up keypairs
2025-08-29 18:30:24.458951 | orchestrator | 2025-08-29 18:30:24 - testbed
2025-08-29 18:30:24.482497 | orchestrator | 2025-08-29 18:30:24 - wait for servers to be gone
2025-08-29 18:30:35.354620 | orchestrator | 2025-08-29 18:30:35 - clean up ports
2025-08-29 18:30:35.544103 | orchestrator | 2025-08-29 18:30:35 - 6b6cc078-9130-4e68-ad9f-af68554d2bdd
2025-08-29 18:30:35.817328 | orchestrator | 2025-08-29 18:30:35 - 7d604af7-0937-4cac-8913-adab33ac86df
2025-08-29 18:30:36.119211 | orchestrator | 2025-08-29 18:30:36 - 80dbf7cc-5a40-47a3-aba2-6cda4df7274d
2025-08-29 18:30:36.377749 | orchestrator | 2025-08-29 18:30:36 - 98c6314c-de44-4bdd-acf2-bb6c55eb381d
2025-08-29 18:30:36.588858 | orchestrator | 2025-08-29 18:30:36 - a2d0f9fa-ae04-4640-a677-1ad29b62df34
2025-08-29 18:30:37.020425 | orchestrator | 2025-08-29 18:30:37 - ab60a0fe-1769-4080-ad39-4b0a8d8fe0f6
2025-08-29 18:30:37.239986 | orchestrator | 2025-08-29 18:30:37 - b60b328a-d1f4-4ff1-811f-1c22ebe6843e
2025-08-29 18:30:37.479782 | orchestrator | 2025-08-29 18:30:37 - clean up volumes
2025-08-29 18:30:37.585126 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-3-node-base
2025-08-29 18:30:37.622747 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-0-node-base
2025-08-29 18:30:37.662190 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-1-node-base
2025-08-29 18:30:37.705924 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-manager-base
2025-08-29 18:30:37.751479 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-2-node-base
2025-08-29 18:30:37.792912 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-5-node-base
2025-08-29 18:30:37.836622 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-4-node-base
2025-08-29 18:30:37.878249 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-5-node-5
2025-08-29 18:30:37.921758 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-2-node-5
2025-08-29 18:30:37.964102 | orchestrator | 2025-08-29 18:30:37 - testbed-volume-8-node-5
2025-08-29 18:30:38.010214 | orchestrator | 2025-08-29 18:30:38 - testbed-volume-1-node-4
2025-08-29 18:30:38.059496 | orchestrator | 2025-08-29 18:30:38 - testbed-volume-7-node-4
2025-08-29 18:30:38.103057 | orchestrator | 2025-08-29 18:30:38 - testbed-volume-0-node-3
2025-08-29 18:30:38.147847 | orchestrator | 2025-08-29 18:30:38 - testbed-volume-4-node-4
2025-08-29 18:30:38.187812 | orchestrator | 2025-08-29 18:30:38 - testbed-volume-6-node-3
2025-08-29 18:30:38.234330 | orchestrator | 2025-08-29 18:30:38 - testbed-volume-3-node-3
2025-08-29 18:30:38.280210 | orchestrator | 2025-08-29 18:30:38 - disconnect routers
2025-08-29 18:30:38.384627 | orchestrator | 2025-08-29 18:30:38 - testbed
2025-08-29 18:30:39.272754 | orchestrator | 2025-08-29 18:30:39 - clean up subnets
2025-08-29 18:30:39.314710 | orchestrator | 2025-08-29 18:30:39 - subnet-testbed-management
2025-08-29 18:30:39.494916 | orchestrator | 2025-08-29 18:30:39 - clean up networks
2025-08-29 18:30:39.701461 | orchestrator | 2025-08-29 18:30:39 - net-testbed-management
2025-08-29 18:30:39.981067 | orchestrator | 2025-08-29 18:30:39 - clean up security groups
2025-08-29 18:30:40.027772 | orchestrator | 2025-08-29 18:30:40 - testbed-node
2025-08-29 18:30:40.141708 | orchestrator | 2025-08-29 18:30:40 - testbed-management
2025-08-29 18:30:40.292202 | orchestrator | 2025-08-29 18:30:40 - clean up floating ips
2025-08-29 18:30:40.326171 | orchestrator | 2025-08-29 18:30:40 - 81.163.193.57
2025-08-29 18:30:41.186539 | orchestrator | 2025-08-29 18:30:41 - clean up routers
2025-08-29 18:30:41.253760 | orchestrator | 2025-08-29 18:30:41 - testbed
2025-08-29 18:30:42.649004 | orchestrator | ok: Runtime: 0:00:20.257972
2025-08-29 18:30:42.651623 |
2025-08-29 18:30:42.651726 | PLAY RECAP
2025-08-29 18:30:42.651794 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-08-29 18:30:42.651876 |
2025-08-29 18:30:42.809761 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-08-29 18:30:42.812183 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-08-29 18:30:43.578776 |
2025-08-29 18:30:43.578993 | PLAY [Cleanup play]
2025-08-29 18:30:43.596006 |
2025-08-29 18:30:43.596181 | TASK [Set cloud fact (Zuul deployment)]
2025-08-29 18:30:43.649443 | orchestrator | ok
2025-08-29 18:30:43.658616 |
2025-08-29 18:30:43.658754 | TASK [Set cloud fact (local deployment)]
2025-08-29 18:30:43.684368 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:43.698047 |
2025-08-29 18:30:43.698189 | TASK [Clean the cloud environment]
2025-08-29 18:30:44.857033 | orchestrator | 2025-08-29 18:30:44 - clean up servers
2025-08-29 18:30:45.452713 | orchestrator | 2025-08-29 18:30:45 - clean up keypairs
2025-08-29 18:30:45.471479 | orchestrator | 2025-08-29 18:30:45 - wait for servers to be gone
2025-08-29 18:30:45.512377 | orchestrator | 2025-08-29 18:30:45 - clean up ports
2025-08-29 18:30:45.590790 | orchestrator | 2025-08-29 18:30:45 - clean up volumes
2025-08-29 18:30:45.646478 | orchestrator | 2025-08-29 18:30:45 - disconnect routers
2025-08-29 18:30:45.668701 | orchestrator | 2025-08-29 18:30:45 - clean up subnets
2025-08-29 18:30:45.686070 | orchestrator | 2025-08-29 18:30:45 - clean up networks
2025-08-29 18:30:45.824167 | orchestrator | 2025-08-29 18:30:45 - clean up security groups
2025-08-29 18:30:45.863877 | orchestrator | 2025-08-29 18:30:45 - clean up floating ips
2025-08-29 18:30:45.889011 | orchestrator | 2025-08-29 18:30:45 - clean up routers
2025-08-29 18:30:46.241199 | orchestrator | ok: Runtime: 0:00:01.408366
2025-08-29 18:30:46.243262 |
2025-08-29 18:30:46.243364 | PLAY RECAP
2025-08-29 18:30:46.243424 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-08-29 18:30:46.243451 |
2025-08-29 18:30:46.372239 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-08-29 18:30:46.374733 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-08-29 18:30:47.143172 |
2025-08-29 18:30:47.143339 | PLAY [Base post-fetch]
2025-08-29 18:30:47.158868 |
2025-08-29 18:30:47.159006 | TASK [fetch-output : Set log path for multiple nodes]
2025-08-29 18:30:47.215056 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:47.230098 |
2025-08-29 18:30:47.230297 | TASK [fetch-output : Set log path for single node]
2025-08-29 18:30:47.270601 | orchestrator | ok
2025-08-29 18:30:47.279215 |
2025-08-29 18:30:47.279348 | LOOP [fetch-output : Ensure local output dirs]
2025-08-29 18:30:47.801352 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/work/logs"
2025-08-29 18:30:48.080102 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/work/artifacts"
2025-08-29 18:30:48.354796 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d6c29d1409ff413595935ce080b46e42/work/docs"
2025-08-29 18:30:48.369803 |
2025-08-29 18:30:48.369981 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-08-29 18:30:49.328288 | orchestrator | changed: .d..t...... ./
2025-08-29 18:30:49.328595 | orchestrator | changed: All items complete
2025-08-29 18:30:49.328647 |
2025-08-29 18:30:50.047859 | orchestrator | changed: .d..t...... ./
2025-08-29 18:30:50.754054 | orchestrator | changed: .d..t...... ./
2025-08-29 18:30:50.783581 |
2025-08-29 18:30:50.783732 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-08-29 18:30:50.819567 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:50.822491 | orchestrator | skipping: Conditional result was False
2025-08-29 18:30:50.845712 |
2025-08-29 18:30:50.845854 | PLAY RECAP
2025-08-29 18:30:50.845940 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-08-29 18:30:50.845984 |
2025-08-29 18:30:50.979660 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-08-29 18:30:50.982282 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-08-29 18:30:51.701646 |
2025-08-29 18:30:51.701810 | PLAY [Base post]
2025-08-29 18:30:51.716550 |
2025-08-29 18:30:51.716685 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-08-29 18:30:52.575980 | orchestrator | changed
2025-08-29 18:30:52.585484 |
2025-08-29 18:30:52.585604 | PLAY RECAP
2025-08-29 18:30:52.585673 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-08-29 18:30:52.585742 |
2025-08-29 18:30:52.708684 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-08-29 18:30:52.709742 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-08-29 18:30:53.504535 |
2025-08-29 18:30:53.504699 | PLAY [Base post-logs]
2025-08-29 18:30:53.515043 |
2025-08-29 18:30:53.515170 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-08-29 18:30:54.040374 | localhost | changed
2025-08-29 18:30:54.054706 |
2025-08-29 18:30:54.054920 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-08-29 18:30:54.094511 | localhost | ok
2025-08-29 18:30:54.101603 |
2025-08-29 18:30:54.101776 | TASK [Set zuul-log-path fact]
2025-08-29
18:30:54.119104 | localhost | ok 2025-08-29 18:30:54.127340 | 2025-08-29 18:30:54.127456 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-08-29 18:30:54.153114 | localhost | ok 2025-08-29 18:30:54.156533 | 2025-08-29 18:30:54.156644 | TASK [upload-logs : Create log directories] 2025-08-29 18:30:54.663424 | localhost | changed 2025-08-29 18:30:54.668210 | 2025-08-29 18:30:54.668369 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-08-29 18:30:55.180879 | localhost -> localhost | ok: Runtime: 0:00:00.006959 2025-08-29 18:30:55.185087 | 2025-08-29 18:30:55.185209 | TASK [upload-logs : Upload logs to log server] 2025-08-29 18:30:55.748990 | localhost | Output suppressed because no_log was given 2025-08-29 18:30:55.751383 | 2025-08-29 18:30:55.751517 | LOOP [upload-logs : Compress console log and json output] 2025-08-29 18:30:55.801063 | localhost | skipping: Conditional result was False 2025-08-29 18:30:55.805429 | localhost | skipping: Conditional result was False 2025-08-29 18:30:55.811821 | 2025-08-29 18:30:55.812055 | LOOP [upload-logs : Upload compressed console log and json output] 2025-08-29 18:30:55.860726 | localhost | skipping: Conditional result was False 2025-08-29 18:30:55.861339 | 2025-08-29 18:30:55.864919 | localhost | skipping: Conditional result was False 2025-08-29 18:30:55.878242 | 2025-08-29 18:30:55.878481 | LOOP [upload-logs : Upload console log and json output]